![](https://pawb.social/pictrs/image/0cff6546-1691-4b4a-81e7-95182bc2bcc6.png)
![](https://lemmy.world/pictrs/image/a8207a32-daa2-4b31-aab4-2d684fc94d18.png)
I wanted to upvote this, but the score was too nice to change it…
Science is pushing the bounds of human knowledge. Science is only science if it propagates; otherwise it's just someone's discovery. Science has to be built upon. Even if it's disproven, that means it was documented well enough to be built upon. That's not to say everything that's disproven is science, because crackpot theories rarely push the bounds of human knowledge.
I hope the brilliant students get their knowledge out there. (But that is unfortunately hard in academia. We should be living in a post-knowledge-scarcity society, but we clearly aren't.)
This is why the machine learning community goes through ArXiv for pretty much everything. We value open and honest communication and abhor knowledge being locked down. He views things this way because he's involved in a community that values real science.
ArXiv is free and all modern science should be open. There were reasons for publications in the past, since knowledge dissemination was hard, and they facilitated it. Now the publications just gatekeep.
This is a fair question. But also, we’re talking about one of the most influential minds in deep learning. If anything he’s selling himself short. He’s definitely not first author on most of them, but I would give all my limbs to work in his lab.
I've noticed a lot of things that are considered autistic specifically in the States may be normal practice in various cultures, having worked with people in Germany and from a large swath of Asia.
It interests me a bit, but I think the takeaway is that autism tends to manifest as a number of quirks, and the ones that don't align with the culture the autistic person lives in are the ones that get paid attention to. That, and there tends to be a bit more fixation on said quirks than those cultures consider normal, sometimes to the detriment of the autistic person or their social life.
Good luck. Don't catch covid again if you can help it. Repeated exposure makes it worse. Pretty sure that's where I'm at. At least I've been up to date with vaccines; otherwise it likely would have been much worse.
https://en.m.wikipedia.org/wiki/Silverfish are also pests. They eat books.
There are times where intersex babies need surgery to prevent complications. For anything else, let them wait until they can decide. Agreed 100%.
In America, it appears to have come into vogue during WWII as a way for single mothers whose husbands were overseas to have less to take care of. After a bit of coercion, my parents admitted the hospital did it without even their consent. That does sound a lot like [insert birth state here] in the [insert birth decade here], so I didn't question it.
Man, Skype used to be so good when it was peer-to-peer… I don't see anything MS brought to that platform that improved it at all.
I hate Slack Overflow (using Slack as documentation) but it beats the pants off of Discord Overflow.
I love Discord for what it's for: quick synchronous talks you'll never refer back to again. So not software development, where indexable logs of information are necessary. I know Discord has search, and now some form of forums. But every Discord I've been to for development (especially modding communities) has a large corpus of synchronous logs where people get annoyed if you ask a question that was answered once, a long time ago, in extremely common language, making it nearly impossible to search for because the keywords have been used out of context of your question hundreds of times since.
If the dev communities used Discord's forum mode more, it wouldn't solve everything, but it'd be much better. There are better places than Discord for these things, but I have been trying to meet people where they're established.
Recalling data and communicating: two things humans are notoriously bad at…
And I wouldn't call a human intelligent if TV is anything to go by. Unfortunately, humans do things they don't understand constantly and confidently. It's commonplace, and you could call it "fake it until you make it," but a lot of the time it's people genuinely thinking they understand something.
LLMs do things confident that they will satisfy their fitness function, but they do not have the ability to see farther than that at this time. Sounds like politics to me.
I'm being a touch facetious, of course, but the idea that the line has to be drawn at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined. "Narrow" refers to being designed for one task and one task only, such as LLMs, which are designed to minimize a loss function of people accepting the output as "acceptable" language, a highly volatile target. AGI, or Strong AI, is AI that can generalize outside of its targeted fitness function, and do so continuously. I don't mean a computer vision network that classifies anomalies as something the car should stop for. That's out-of-distribution reasoning, sure, but if the network can reasonably determine what's in-distribution as part of its loss function, then anything that falls significantly outside can be easily flagged. That's not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.
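The "flag anything significantly outside the distribution" idea above can be sketched as a simple confidence threshold. This is a minimal illustration, not any production technique: the classifier's logits, the threshold value, and the function names are all hypothetical.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flag_out_of_distribution(logits, threshold=0.7):
    """Flag an input whose top-class confidence falls below the
    threshold: domain recognition, not true generalization."""
    probs = softmax(logits)
    return max(probs) < threshold

# A confident in-distribution prediction vs. an ambiguous one.
print(flag_out_of_distribution([9.0, 1.0, 0.5]))  # → False (clear winner)
print(flag_out_of_distribution([1.1, 1.0, 0.9]))  # → True (no clear class)
```

The point of the sketch is that nothing here generalizes: the model only ever says "this looks unlike my training targets," which is exactly the domain-recognition behavior described above.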
This is an important conversation to have, though. The way we use language is highly personal, based upon our experiences, and that makes coming to an understanding in natural languages hard. Constructed languages aren't the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop-sci magazines WILL get harder to understand. That's why I propose splitting the ideas in a way that allows for more nuanced discussion, instead of redefining terms that appear in thousands of groundbreaking research papers spanning a century, which would make research a matter of historical linguistics as well as one of mathematical understanding. Jargon is already hard enough as it is.
The term AI is older than the idea of machine learning. AI is a rectangle where machine learning is a square. And deep learning is a unit square.
Please, don't muddy the waters. That's what caused the first AI winter. But do go after the liars. I'm all for that.
… Alexa literally is AI? You mean to say that Alexa isn't AGI. AI is the taking of inputs and the outputting of something rational. The first AIs were just large compilations of if-else rules built on first-order logic. Later AI used approximate or brute-force state calculations such as probabilistic trees or minimax search. AI controls how people's lines are drawn in popular art programs such as Clip Studio when they use the assist functions. But none of these AIs could tell me something new, only what they were designed to compute.
The term AI is a lot more broad than you think.
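Minimax, mentioned above, fits in a few lines, which is rather the point: it is brute-force state calculation, not generalization. This is a toy sketch over an explicit game tree; the tree shape and function name are illustrative, not any particular engine.

```python
def minimax(node, maximizing):
    """Brute-force minimax over a game tree given as nested lists:
    a leaf is a terminal score, an inner node is a list of children.
    Each player assumes the opponent plays optimally."""
    if not isinstance(node, list):  # leaf: terminal score
        return node
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# A two-ply toy tree: the maximizer moves first, the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # → 3
```

The maximizer picks the first branch because the minimizer would answer the tempting 9 with a 2; the algorithm "knows" nothing beyond exhaustively scoring states it was designed to compute.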
/prəˈrɒgətɪv/ Huh. I guess usually when a schwa and a rhotic are involved, my dialect drops the schwa. I pronounce it /prˈrɒgətɪv/, which could be romanized as pur-ROH-guh-tiv, but there's no actual separation between the u and the r there.
True. Can we get it to 420 since we overshot 69?