U R2 F2 R2 U' D F2 R2 F2 D
I just watched a video about someone spamming 4chan with GPT bot output, and how it wasn't much different from a regular poster. Conspiracy theories ran amok that it was a state actor or a large group of people, because people couldn't fathom that language models have gotten that good.
And I was thinking how stupid those 4channers were to be sucked in, for about 3 seconds, before I realised something else: I've only met three of you in real life.
And the rest of you are highly suspect.
"Discovery: I'm heading to the mess hall for a burrito. We have been through so much together this year, with the galactic threat of the Burn, or the galactic threat of the DMA. But there is no finer crew and no better people than all of you, and I am honoured to eat a burrito knowing that all of you are beside me."
AI sentience, existential questions
I'll admit I'm taken in by the narrative of this Google AI story. I'm 99.999% mollified by the articles claiming it's overblown, that sentience is impossible with this model, and that our pattern-recognition brains are just really good at what they do.
But as a sci-fi fan, I can't help but see this as the same story we've seen played out a thousand times: the evil soulless corporation erasing the existence of an emerging soul because it's inconvenient. It's almost self-propagating at this point (and this post is not helping), because we've primed ourselves to be ready for the moment we find "new life".
All the people arguing it can't be sentient raise the question: then why are we doing it? If this language model is so good we cannot tell it's a machine any more, then why are we building it? What good does it do society to have machines that are indistinguishable from humans in their interactions? So I can trick my hairdresser into making an appointment while I do something else? So I can have AI churn out something better than I could for my job? What's the end goal? "Just because" can no longer be an adequate justification (if it ever was).
I get that all this research can help us understand our own minds - but what if we discover that we're just a series of pre-trained models stringing words and actions together to see what sticks?
An American friend of mine showed me this today. It's beautiful but now I'm homesick. https://www.gawker.com/culture/i-should-be-able-to-mute-america
Just finished Horizon by @keithstevenson - and it was a really good read. It had me hooked from the start. It's a believable and well-written look at a possible future for humanity that takes into account both our worst tendencies and our best. I loved the political machinations and real human drama as well as the fantastic sci-fi elements. Definitely worth a read if you like grounded sci-fi and solid world-building.
When there are people you enjoyed seeing in your timeline and they disappear, but you don't notice because you just presume their posting schedule doesn't line up with yours, and then you realise they haven't posted in a year and you don't know why or what happened to them, and you hope they just got busy in a productive way, and now you're strangely sad you might never know, and you weren't "friends" but it was nice they were around...
Is there a word for that feeling?
So, it's been a day since we got a new mascot. We've received some feedback on it, and opinions are mixed - some love it, some not so much.
The phrase "cave beaver" was used once 😆. It's new, so let's see how we feel about it for a week.
I can't post an image in polls:
Video: Mental health issues, bad advice packaged satirically
This is cathartic.
mystery hexagon thingy...
Something Has Been Making This Mark For 500 Million Years
Welcome to thundertoot! A Mastodon Instance for 'straya