So is your comment. And mine. What do you think our brains do? Magic?
edit: This may sound inflammatory but I mean no offense
I’m vaguely aware of Org-mode but only as an alternative to Markdown. Last time I looked into it, though (years ago), Markdown seemed like a much better option for me for various reasons. Do you have a good argument for why Org-mode is a better choice for common use cases than the relatively universal GitHub-flavored Markdown?
Hey, appreciate the update. That’s really too bad!
I’ve been using Kagi. It works well. I like it. Costs money, but that’s a positive in my book.
Okay, thanks for the explanation. Maybe I will keep watching, then. That gives me a little hope!
Ugh, has the second season gotten better? I watched the first two episodes of the second season and was really disappointed… enough that I stopped watching. I didn’t mind that they veered so far from the book the first season, because it was inevitable and they did a great job capturing the feeling.
But the second season is just bonkers, with lots of sloppy writing so far. Totally unbelievable stunts for no reason other than suspense (that underwater scene with the mouth-to-mouth rebreathing, for example, was so stupid, and then they sit down and they’re like “phew, anyway”), and suddenly Hari is a split-consciousness main character and there’s forward time travel and no Second Foundation and two different types of non-psychohistory-developed psychic abilities and WE SEE THE IDENTITY OF THE MULE? Like, come on. In just two episodes they trashed some of the most compelling/thematic material and plot points of the original and turned it into a space-magic grab bag of action tropes.
I’m mostly just salty. Perfectly fine if you enjoy it personally. But maybe some of these points resonate with you and, knowing them, you can convince me to keep watching? Because I did really like the first season.
I agree with you, but why are you disparaging kbin? Plenty of good discussion here, and a good community.
I see this complaint a lot but honestly I don’t quite understand what the big deal is. Not everyone is subscribed to the same communities. Personally, I’d love a feature on kbin/lemmy that rolled up duplicate posts on the client, but it’s really not that annoying for me to see a couple dupes in my feed if they’re posted in relevant communities /shrug
Ever since Obama beat Clinton 15 years ago
Jesus I thought you were exaggerating and then I did the math
Hey, this is excellent. I was looking to do something like this a few months ago. Bought a few ESP devices to mess with, but never got around to it. I might try it out now, though, using your guide. Thank you!
Got a source? When I first read about this people were cautiously optimistic partly because the head researcher was well-respected.
“our compound shows greatly consistent x-ray diffraction spectrum with the previously reported structure data”
Uhh, doesn’t look like it to me. This paper’s X-ray diffraction spectrum looks pretty noisy compared to the one from the original paper, with some clear additional/different peaks in certain regions. That could potentially affect the result. I was under the impression from the original paper that a subtle compression of the lattice structure was pretty important to formation of quantum wells for superconductivity, so if the X-ray diff isn’t spot on I’ll wait for some more failures before calling it busted.
This is a really terrific explanation. The author puts some very technical concepts into accessible terms, but not so far from reality as to cloud the original concepts. Most other attempts I’ve seen at explaining LLMs or any other NN-based pop tech are either waaaay oversimplified, heavily abstracted, or are meant for a technical audience and are dry and opaque. I’m saving this for sure. Great read.
Fair enough!
I’m not saying this to be an asshole, because I’m happy that you got to the right conclusion eventually, but I have to clarify for history’s sake: if you thought Trump was playing 4D chess in 2015-2016 then you were being duped. Most of us understood what he was from the get-go. Claims of 4D chess have always been stupid.
Again, I’m happy that you figured it out. Everyone makes mistakes. But “we” didn’t think he was playing 4D chess. The hypothesis about Musk/Twitter above is hardly the same.
I honestly only made it a few minutes in, and there is probably plenty of merit to the rest of her perspective. But… I just couldn’t get past the “AI doesn’t exist” part. I get that you don’t know or care about the difference and you associate the term “AI” with sci-fi-like artificial sentience/AGI, but “AI” has been used for decades to refer to things that mimic intelligence, not just full-on artificial general intelligence. Algorithms governing NPC behavior and pathfinding in video games are AI, and that’s a perfectly accurate description. SmarterChild was AI… even ELIZA was AI. Stuff like GAN models and LLMs is certainly AI. The goal posts for “intelligence” have moved farther and farther back with every innovation. The AI we have now was fantasy just 20 years ago. Even just five years ago, to most people.
That’s not really how LLMs work. You’re basically describing Markov chains. The statement “It’s just a statistical prediction model with billions of parameters” also applies to the human brain. An LLM is much more of a black box than you’re implying.
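To make the contrast concrete, here’s a toy sketch (in Python, with illustrative names I made up) of the Markov-chain caricature: raw bigram counts where the next word depends only on the single previous word. That’s the “just statistics” picture, and it’s a long way from what a transformer with billions of parameters is doing.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-word transitions in a training corpus.

    This is the whole 'model': a table of how often each word
    followed each other word. No context beyond one word back.
    """
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(counts, word):
    """Pick a next word proportionally to how often it followed `word`."""
    followers = counts[word]
    choices = list(followers)
    weights = [followers[w] for w in choices]
    return random.choices(choices, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(sample_next(model, "the"))  # one of: "cat" (2/3) or "mat" (1/3)
```

An LLM also outputs a probability distribution over the next token, but the distribution is computed from the entire preceding context by a deep network, not looked up from a co-occurrence table, which is exactly why it behaves like a black box in a way this table never could.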
Collar.