fuck it, i never ever do those “reblog for X, this one really works!” posts, but this one doesn’t have any of that BS, this is just straight up wishing us good things; and then the comment doesn’t even say any of that either. Zero claims on this post, all positive vibes
May you end this week feeling ever more certain of a future you’ll love
i saw some comments on tiktok where people were talking bout how they found tumblr too hard to use, part of it being the lack of visible dates, so “what if you reblog or like something from five years ago?!”
buddy… we have posts circulating still from 2011, it’s literally just how it is
Being on tumblr for years like:
this post is 2 years old and it’s only going to get funnier as it gets older
I love folklore so much because depending on the location and era it comes from it’s either the most terrifying concept or the dumbest thing you’ve ever heard
Mexican Folklore: You think this place is a Normal Location? Tch. You fool. Everyone knows this place is the SCARY Location.
British Folklore: There’s a little Beast in your house… make sure you give it the necessary porridge……. otherwise it might turn to mischief…….
German Folklore: For the love of God, do NOT trust hot people and do NOT trust babies and do NOT trust short men and do NOT trust Christmas and do NOT trust sausage and do NOT trust the elderly and
US Folklore: This Giant Boy From Texas Is God’s Favorite
Just on a whim, because I know that Alcibiades is one of the weirdest and funniest characters in ancient Greek history, I asked ChatGPT “What’s the weirdest thing Alcibiades ever did?”
ChatGPT came back with the details of something Alcibiades (henceforth referred to as ‘Alci’ so I don’t have to keep typing it out) was accused of, but acquitted of.
When I pointed out that he had been acquitted and may not have actually done this thing, ChatGPT apologised and said, “yes, he was acquitted”, and then went on to tell me that, nonetheless, the event was significant because it made Alci flee the city.
Alci did not flee the city; he was sent away on a military expedition, which was exactly what he’d wanted and asked for. When I pointed that out, ChatGPT apologised again for being wrong.
I asked again for weird things he might actually have done, and was told one version of a story I’ve heard before about how Alci stole some stuff from a friend. ChatGPT’s version was different from what I’d heard, though, so I mentioned that, and only then did ChatGPT acknowledge that there were different versions of the story. As part of its apology and correction, ChatGPT said that it did not always have access to all information - but then proceeded to provide details of the version of the story I’d heard before, showing that it did, in fact, have access to that information.
I asked again, what is the weirdest thing Alcibiades ever did? ChatGPT gave me an answer, which was a story I’d never heard before, so I asked for a source. ChatGPT told me it was in Plutarch’s Lives, and I presumed it was in his Life of Alcibiades, so that’s where I looked. When I said I couldn’t find it there, ChatGPT told me, sorry for not being specific, it was actually in Plutarch’s Life of Nicias. So I went and read Plutarch’s Life of Nicias and couldn’t find it.
So I told ChatGPT that I couldn’t find the story in that book, could it please be more specific? What I was hoping for was a chapter or page number or something, I just presumed I’d missed it.
ChatGPT came back with “no, actually it’s not in that book, it may be a later invention, there is no concrete evidence for this story.”
TL;DR: ChatGPT cannot be trusted. Even when it does give you a source, it can be wrong. It has no capacity to evaluate the accuracy or likely accuracy of the information it gives you. It will present you with wrong or debatable information and give you absolutely no indication that it may not be correct, or that other versions or interpretations are possible.
gotta remember that ChatGPT works basically the same way autocomplete does, just autocompleting longer runs of reasonably coherent text.
it’s not looking up facts: it’s trying to say the thing most likely to come next given the text it was trained on, while also trying not to perfectly replicate that training text, because it’s supposed to be a bit creative.
what this means is that it’s actually primed to lie to you. you can feed it nothing but perfectly factual text and it will spit back lies because the truth replicates the training set too closely.
it’s not really capable of answering a question the way a person might.
what it does is generate text that reasonably seems like what an answer to that question might look like.
it’s a bullshit generator.
it is made to bullshit tech investors (who exclusively talk by making up things that sound correct without regard for the actual truth). so, if you’re smarter than a venture capitalist, don’t fall for the bullshit meant to ensnare venture capitalists.
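the “fancy autocomplete” point above can be sketched in a few lines. this is a toy bigram model with made-up counts, not anything like ChatGPT’s actual implementation, but the sampling step works on the same principle: pick a *statistically likely* continuation, with a temperature knob for the “be a bit creative” part. nothing anywhere in the loop checks whether the continuation is true.

```python
import math
import random

# Toy bigram "language model": counts of which word tends to follow which.
# These counts are invented for illustration; a real LLM learns billions of
# parameters, but the next-token step is still "sample a likely continuation".
BIGRAM_COUNTS = {
    "alcibiades": {"was": 8, "fled": 3, "stole": 2},
    "was": {"acquitted": 5, "accused": 4, "weird": 1},
}

def next_token_probs(prev: str, temperature: float = 1.0) -> dict[str, float]:
    """Turn follow-word counts into a probability distribution.

    Lower temperature sharpens the distribution toward the most common
    continuation; higher temperature flattens it, which is roughly how
    "creativity" gets dialed in.
    """
    counts = BIGRAM_COUNTS[prev]
    logits = {w: math.log(c) / temperature for w, c in counts.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {w: math.exp(v) / z for w, v in logits.items()}

def sample_next(prev: str, temperature: float = 1.0, rng=random) -> str:
    """Sample the next word. Note that nothing here evaluates whether the
    continuation is factually correct -- only whether it is likely."""
    probs = next_token_probs(prev, temperature)
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]
```

at low temperature this toy model will almost always continue “alcibiades” with “was”, because that’s the most common continuation in its (fake) training counts; crank the temperature up and “fled” or “stole” start coming out instead, whether or not they happened.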
using tumblr mobile and seeing people talk about a desktop layout change is like hearing a timer suddenly start ticking down. I am safe for now but I hear the danger
blood being frequently described as having a “coppery smell” in fiction is kind of funny considering that the metallic component of blood is iron, not copper
in fact if your blood smells or tastes like copper you probably have more urgent things to worry about than it being outside your body. it’s probably better that it’s not inside you anymore actually.
story where blood is described as smelling or tasting “coppery” and it’s actually early foreshadowing that all the characters are suffering from heavy metal poisoning