beloch 1 days ago [-]
"This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
-------------
Who is doing the research matters. What is presented here is not the product of academia. It's the product of a company that produces AI agents. The picture this web page paints may appear rosy and have just enough thorns to be convincing, but it's the equivalent of a tobacco company telling you that their product is neither addictive nor carcinogenic.
I fully expect actual research will be done on the impact of AI and our hopes for it. This page, however, is marketing.
mickael-kerjean 21 hours ago [-]
Anthropic are masters of marketing that makes people think they're here to do good. A few weeks ago, they got great visibility on HN by promising Claude Max 20x accounts to people who are active in open source repositories with at least 5k stars on GitHub [1]. My main project [2] has more than double the minimum requirement, and I'm still waiting.
I just checked your projects, and it looks just like something I was looking for. And I hope in a few weeks the guys from Anthropic will give you what they promised.
However, since we're being frank here, I'll say I'll download the most recent release and be very careful about upgrading, because I don't put much trust in projects co-created with LLMs. I know there is a full spectrum, but I've seen enough, and I don't have the resources to check where on the spectrum your project ends up. LLMs are a powerful drug and terribly hard to stop once you start.
mickael-kerjean 5 hours ago [-]
> I don't put much trust in projects co-created with LLMs
It's not; there's history going back to 2017, and I've been working on this full time.
fasterik 21 hours ago [-]
Humans are complex. It's possible for someone to want to do good and at the same time want to promote/market their product and make a profit. I don't see a contradiction there.
mickael-kerjean 21 hours ago [-]
What do you call a marketing campaign that does not deliver on what it promised? I have no problem with Anthropic trying to create goodwill around their products, but this particular campaign, aimed at building goodwill among people doing open source, was an outright lie that did not deliver what it promised, and it was all done on HN.
When a company lies about something that trivial, it does not inspire trust.
fasterik 20 hours ago [-]
It's an outright lie because they haven't greenlit your personal project after two weeks? Did it occur to you that maybe they just got a lot of applications and are prioritizing other projects or still working through a backlog?
3371 17 hours ago [-]
They would only be 100% lying if they had an infinite budget allocated to this campaign and still hadn't approved all requests.
Aerolfos 24 hours ago [-]
> "This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
Also AI written, but I suppose that's expected. The big AI companies seem intent on making all their blog posts and communications carry the AI tells, so you know they didn't actually bother writing them.
taurusnoises 22 hours ago [-]
I'd love to be able to actually articulate what makes AI writing read like AI writing. A few of the common tells come to mind (contrast construction, hyperbole, overused / wrongly used em-dashes, etc). The above quote doesn't have any of that, and yet it certainly feels AI. The first sentence (both what it says and where it's placed) suggests AI to me. But I couldn't quite tell you why.
nlawalker 19 hours ago [-]
Before AI this style of prose was called "thank you for coming to my TED talk", with a little bit of "LinkedIn broetry". Confident assertions and pat explanations about truths that will make you a better person upon internalization; a pop psychologist convincing you of an unintuitive and surprising new idea about how the universe works that catches you off guard but then turns your perception on its head and revolutionizes the way you see the world. Contemporary marketing speak of a particular "coolly subverting your expectations and injecting the truth straight into your veins" flavor.
Aerolfos 17 hours ago [-]
It is a style that AI (intentionally?) emulates, for sure, though the "regression to the mean" and general vagueness seem to be what really separate the classic TED talk/puffy blog from AI. Humans like specific examples and anecdotes; AI fails at making those.
Jensson 22 hours ago [-]
I think the main tell is that it says basically nothing; it reads like a human that is paid per word. Humans prefer easy-to-read articles that don't hide the point behind such fluff, so there is no reason to do it except just to spam words.
monegator 22 hours ago [-]
> it reads like a human that is paid per word
That's essentially it. But not only that: we learned to distinguish things written by humans for humans from things written by humans (paid by the word) for SEO. LLMs tend to produce text that would be great for SEO, so it stands out as not for humans.
aurmc 16 hours ago [-]
Wikipedia has an excellent article about exactly this [1], in their editor information section. There's a section called "Undue emphasis on significance, legacy, and broader trends" that provides some examples:
>Words to watch: stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted, ...
Once I read this, it started sticking out to me all the time.
> The big AI companies seem intent on making all their blog posts and communications carry the AI tells, so you know they didn't actually bother writing them
Investors want to see you use your own product; if the company itself doesn't feel the product is good enough to write its own announcements, investors would worry about its future.
And AI is still a product aimed primarily at investors, not consumers.
matteomrj 15 hours ago [-]
I think it's still nice that they do this kind of research on the side. Hopefully people will take it for what it is: research done by a company with a clear conflict of interest on the subject.
ngc248 18 hours ago [-]
They probably surveyed their own Agents
vanillameow 1 days ago [-]
I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?
"I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle."
"Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one."
etc. I do believe AI currently accelerates businesses, especially in software dev. We work with a contractor who uses Claude Code to reach an incredible development pace for the size of their team, but when we sit down with them in meetings they also understand what's being created, they are able to argue their architectural choices, and they know how to propose business value.
You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then
a) everyone can do it, so you won't actually have any value to propose, and
b) Once the AI can run businesses without humans in the loop, you can bet your ass they will not out of the goodness of their hearts keep giving that ability away for $20.
In summary, AI, if used to accelerate businesses, _CAN_ be good. Buying it as a magic bullet to bring you out of poverty is probably a worse choice than just buying a lottery ticket.
eloisant 1 days ago [-]
That really reminds me of the "mashup" bubble in the late 2000s, when all services started to provide APIs and people were calling themselves "entrepreneurs" for combining 2 sources of data, like putting craigslist ads on a map.
That didn't last long!
darkwater 1 days ago [-]
Are you sure? We have many SaaS and final products which are just stitching together more SaaS. We have a very vocal part of the HN community always reminding you to buy a SaaS solution and connect it to your business instead of maintaining an in-house bespoke solution.
nsnzjznzbx 23 hours ago [-]
Isn't almost everyone doing that? Deploy Docker to AWS, connect it to Slack, OpenAI and Anthropic to do X, Y, Z.
agos 23 hours ago [-]
that's like saying my job is to transfer money from my employer to the homeowner. Technically true but something else happens in the process
nsnzjznzbx 12 hours ago [-]
So you're saying mashups were literally just connecting two things and selling it?
vbezhenar 22 hours ago [-]
I think there's a "time window" right now, before most people realize the scale of AI. Those who jump in first can monetize it. It certainly won't last forever, but you can earn some money while it lasts. And you will have years of AI-relevant experience afterwards.
vanillameow 19 hours ago [-]
Not incorrect, but it honestly borders on grifting a lot of the time imo. At least it's a spectrum. If you are supercharging your existing technical and domain knowledge, and actually caring about the security of your customers while doing so, fair play. That is real entrepreneurship.
Then there's people who are "well intentioned", I guess, but lack the technical knowledge. A friend of a friend with no technical background is selling companies websites that he writes with Claude. They look shiny and everyone's happy in the short run, but I don't doubt issues will come up down the line that someone will have to be responsible for. I'd personally feel like I was ripping people off doing this, but I also think Dunning-Kruger prevents you from knowing any better if you're the type of person doing this.
Then there's the whole B2B SaaS gang that are basically just producing vaporware and telling other people how to produce more vaporware. This is no different from crypto, NFTs etc. before it really. Just people trying to hustle others.
And then there's the whole clawdbot gang, probably burning more in tokens every day than normal people use in a month so they can sort 18 e-mails.
So yeah I mean you're right, there certainly is a subset of people who are using this ethically (as ethically as you can use LLMs but that's another story) to make some money on the side. Certainly not the majority though I'd say.
freefaler 24 hours ago [-]
If the technology becomes cheaper, it creates more market pressure by changing the cost base of certain products. For example, when the printing press was invented, books went from a luxury to something still expensive but more affordable. In software markets that means we will have more software, more competition, and in free market segments profits will evaporate.
The pseudo "entrepreneurs" who think they can outsmart the market by working less are just naive. In a free market economy optimization is brutal, and a freelance developer will sell the same "product" cheaper, because he has the same technology available to him.
So the only way to get the gains from these AI technologies is to have something that can't be easily copied, like market knowledge, data access, or sweetheart deals with big companies that can pay more because their profits support the higher spend.
Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation. But the margins will go waaay down. $25 for a set of forms and a database is not gonna cut it anymore.
vanillameow 22 hours ago [-]
> Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation.
True in the current state of LLMs, possibly not true forever if someone finds the magic bullet that turns the one-shotting (reliable) software dream that companies like Anthropic and Perplexity currently peddle into reality. Seems far-fetched ATM but the gains since GPT-2 have been very real.
We're quite a ways away from this though, even with Opus 4.6 and the like. And even further from it being part of Claude Code rather than some proprietary $1000/mo. closed-source solution.
As you say though, _if_ such a technology were to exist, it's Anthropic that holds all the cards, not random entrepreneur #25721 who is asking the Anthropic API the same thing that the actual customer could just be asking directly. At that point you're an undesirable middleman, not a business.
lm28469 24 hours ago [-]
> I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?
The "fake it till you make it" mentality degenerated completely once we got the internet. It used to be "crypto will make you rich, buy my coin/course"; now it's "AI will make you rich, buy my tool/course". The same type of people will get fleeced.
That's what really gets me. These folks who are "so rich from said technology" always need you to buy their course for $5,000… Like, buddy, if you were bringing in so much money you probably wouldn't be pestering people to take your "course", and you certainly wouldn't give away info whose only value is that it's obscure or hard to do… They are also almost ALWAYS self-proclaimed experts. Overnight, everyone became an AI expert. Before ChatGPT they probably had zero exposure; AI was a large field and machine learning is one small part of it.
keiferski 1 days ago [-]
It’s funny how so much of market demand just ends up boiling down to basic needs. Everyone’s always trying to hustle so they don’t have to worry about financial instability.
The quote about being temporarily embarrassed millionaires comes to mind….
gavinray 13 hours ago [-]
> You can't just buy a Claude subscription and have it magically solve your problems.
Devil's advocate: the operating intelligence of Opus 4.6 is higher than the average person's, and it has orders of magnitude more domain knowledge.
If Average Joe were to delegate most of their life decisions to the chatbot, it'd probably turn out better, or in the worst case, more informed.
array_key_first 8 hours ago [-]
> The operating intelligence of Opus 4.6 is higher than the average person's
Okay how are we measuring this? We can't even quantify intelligence for humans accurately, let alone compare it to machines. Hell, we can't even really define intelligence.
I mean, humans can learn on the fly and progressively, and currently no LLMs are capable of that. Literally none of them, and no, context doesn't count. So if that's the measure, then LLMs sit at a 0 along with rocks and twigs, and humans sit closer to a 1.
Obviously that's not really the measurement, LLMs are quite good. But I don't think we can say, for sure, LLMs are a replacement for humans. They might replace some specific tasks, but humans are not a set of tasks. I'd still rather have 10 engineers than 0 engineers and 10 Claude Code licenses.
judahmeek 10 hours ago [-]
> If Average Joe were to delegate most of their life decisions to the chatbot, it'd probably turn out better, or in the worst case, more informed.
Even if true, no one is ever going to do that, because of Dunning-Kruger, so it's still not magically solving problems.
bossyTeacher 15 hours ago [-]
> You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then a) everyone can do it, so you won't actually have any value to propose
Louder for those at the back.
nsnzjznzbx 23 hours ago [-]
A great AI future is the robots doing the work so we can be free. But none of the major isms, i.e. capitalism or communism, are geared up to provide that. Maybe it's hackable with a mix of UBI and capitalism.
"I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
"I live in a war zone... AI can not only give practical advice, but also emotionally calm me down during panic attacks. It can calm someone during a missile attack in one chat, and laugh with me about something silly in another. That’s what makes it not fragmented into a therapist/teacher/friend, but something whole." Ukraine
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
"The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than its supposed to be."
Frieren 1 days ago [-]
> "The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than its supposed to be."
I can see this kind of survivorship-bias story distorting reality. Having millions of people ask for "specific tests" because AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it's "worth doing the test that AI says", just in case. But...
> which came back 6 times higher than it's supposed to be.
It has been proven that massive testing creates many false positives.
Tests may not be as reliable as thought, but they are good enough when other symptoms are accounted for. Randomly testing people based on AI hallucinations can increase the amount of unnecessary medication or even interventions.
Gareth321 24 hours ago [-]
> I can see this kind of survivorship-bias story distorting reality. Having millions of people ask for "specific tests" because AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it's "worth doing the test that AI says", just in case. But...
This is a competition of public and private interests. A sick individual is going to lobby for tests until they discover the cause. From a public perspective, it might be cheaper to just let them die. AI is an advocate for the individual.
For the record, ChatGPT helped me diagnose a lifelong illness. I'm a new man now thanks to AI. Literally life changing. I had spent decades pleading for tests because no one could figure out the cause. I think a likely outcome here is not necessarily 10,000x more tests performed, but similar or even fewer tests, because the diagnosis success rate with AI is higher. It's not subject to bias. People tend to be more honest and reflective with their AI than they are with doctors. They get 5 minutes to give the entire case to the doctor. With an AI they can spend weeks debating and reflecting. This builds a case history far more detailed and accurate than anything we have in modern medicine today. Amplified by an order of magnitude because the AI can extract meaningful insights from the discussion.
In the very near future our AI will contact our GP for us. Soon after that, our GP will be our AI.
etrautmann 20 hours ago [-]
I’m not sure how you can come to the conclusion that AI is an advocate for the individual writ large. It seems that AI can just as easily be used to make algorithmic decisions on who receives care (based on symptoms etc). Whether or not that’s an equalizing influence or not depends on the algorithm, training data, etc.
Frieren 20 hours ago [-]
> From a public perspective, it might be cheaper to just let them die.
You missed the point. More tests can be detrimental to the patient's health, as they increase the risk of unneeded medication or surgery. Also, many tests, like X-rays, have their own risks. Doing them for the sake of it increases overall mortality.
So not over-testing is not just cheaper, but better for people's health.
Balgair 20 hours ago [-]
Yeah I see that there can be a false positive/negative issue too.
For instance, allergy tests have a false positive rate of ~10% and a false negative rate of ~48%. So you really need an MD (or AI) to help tease things out there.
But I'll push back here a bit. Taking random tests will of course put you at the mercy of statistics. I think this is where AI will actually really help. The tests it'll have you take are not random, any more than an MD's tests are (okay, maybe a tad more?). Instead, the AI's testing strategy will be broader than an MD's. Combine the experience and physical presence of the MD with the deep "knowledge" of the AI, and I think that centaur is a lot more potent.
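To put numbers on that base-rate problem, here's a quick back-of-the-envelope sketch in Python, using the ~10% false positive and ~48% false negative rates above. The 5% prevalence is purely an assumption for illustration:

    # Rough Bayes calculation: how often is a positive test a true positive?
    prevalence = 0.05            # assumed: 5% of those tested actually have the condition
    false_positive_rate = 0.10   # ~10% of healthy people test positive anyway
    false_negative_rate = 0.48   # ~48% of affected people test negative

    sensitivity = 1 - false_negative_rate   # P(positive | affected) = 0.52
    specificity = 1 - false_positive_rate   # P(negative | healthy)  = 0.90

    # Bayes' theorem: P(affected | positive)
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive

    print(f"P(positive test) = {p_positive:.3f}")       # ~0.121
    print(f"P(affected | positive test) = {ppv:.1%}")   # ~21.5%

With these numbers, only about one in five positives is real, which is exactly why screening everyone "just in case" floods you with false alarms.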
jfalcon 18 hours ago [-]
> I can see this kind of survivorship-bias story distorting reality.
That was my take on the entire report, which I think points to an inherent bias within the data and stories. You have the entrepreneurial stories, and then you have the ones where people are both impacted and receiving benefits.
The infographics and charts even call out how countries that are "first-world" with fewer safety nets are more likely to be in "survival" mode compared to countries with them.
The bit from George Carlin's standup routine about how the poor are there just to scare the hell out of the middle class rings true in this reflection. Poorer countries accept their current realities, and the feedback reflects the hustle. Richer countries with safety nets reflect the existential issues of previous industrial revolutions. Richer countries without safety nets reflect the fear that their efforts will be made "replaceable" by AI.
As for the rest - massive testing creating false positives - that is an issue of implementation and the errors introduced by humans, not the data itself. If the process were in large part automated, it could screen for a larger panel of issues at less cost.
From my experience working deep in data and human factors: the issue in quantifying the root cause isn't reality; we live a shared experience in general. The issue is that the data isn't good enough. What bugs us about it is the psychology that our perceptions differ enough that we will fight to prove an unknown.
array_key_first 8 hours ago [-]
It's important to note that doctors are also humans, and humans are squishy in every sense of the word. Their brains are squishy: they take in a ton of information and distill it down to decisions that we don't understand how we arrived at.
The fact that I'm young-ish and healthy looking, with good skin and hair, leads many doctors to outright dismiss me. Never mind my history of cancer and the undeniable fact that I am obviously not healthy. But I can also use the squishiness to my advantage. I talk confidently, I push back, and that works. It sort of short-circuits a lot of doctors' brains.
beeflet 24 hours ago [-]
I don't know about survivorship bias. LLMs are well suited to this task of taking in a cloud of soft data, like a description of symptoms, and spitting out a potential diagnosis.
They're good at acting as a "reverse dictionary", where you give them a description of something and they know the word for it. They have approximate knowledge of many things.
atiedebee 19 hours ago [-]
> I don't know about survivorship bias. LLMs are well suited to this task of taking in a cloud of soft data, like a description of symptoms, and spitting out a potential diagnosis.
And it will do so confidently and incorrectly. A single description of symptoms from a patient is very unlikely to be enough. This is why doctors are there to ask follow-up questions and do examinations. Symptoms alone can describe a dozen different illnesses.
heavyset_go 22 hours ago [-]
I searched for "love", and it's depressing.
> "It’s not healthy to love someone or something that can’t tell you no." - Not Currently Working, United States of America
> "Instead of AI doing my chores, AI does the stuff that I love—in two minutes, without any passion." - Student, United Kingdom
> "I used to write songs for my kids. Now I have [AI music product] make them for me. I used to write poems for those I loved... I used to bust my brain doing research, and now I get a research summary that is better... but I didn’t learn the paths in between. And yet, I use it because I have to pay off my house, pay off my land, and feed my little kids so I can find an hour on Saturdays to do something meaningful with them." - Software Engineer, United States of America
> "I believe AI is likely to kill me and everyone I love… building an AI that’s smarter than us before we’ve figured out how to keep it under control will likely destroy everyone and everything they value." - Software Engineer, United Arab Emirates
This was one of the highlighted quotes:
> "I’ve been told I’m ‘too much, treatment resistant, complex’ by providers. Within six months of working alongside AI, I was able to understand my own inner world in a way I never could before. I was doing creative writing again after quitting for two years. I developed hope again — that’s the through line." - Healthcare Worker, United States of America
A healthcare worker outsourcing their own treatment to an LLM, who won't tell them no, is terrifying.
salamanteri 1 days ago [-]
> "I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
There's always something about claims like this. I'm not claiming that AI can't speed up your processes, but I question a person's expertise when they claim months or years of work turn into days or weeks. It just doesn't make sense to me.
"My output is like 25x what it used to be. I’ve built over 20 backend server tools, 7 major projects in the last 6 months—my work output this year is greater than the last five combined. I can typically finish a significant project in a day or two."
Lerc 21 hours ago [-]
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
I am not sure this would be true, given how AIs have refused to kill processes.
If AI is programmed to always serve its makers, as some are arguing, then it would certainly become true.
dmacvicar 1 days ago [-]
"AI is sort of like money... it just makes you more of what you already are."
darkwater 1 days ago [-]
Oh, this is really good. Even just for the money part. Thanks!
cbg0 1 days ago [-]
I love how many of these comments have em dashes in them and how many are just outright trolling.
Gormo 21 hours ago [-]
Em dashes are not a valid indicator of LLM output.
cbg0 20 hours ago [-]
Not the only indicator, for sure.
preommr 23 hours ago [-]
> "If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
For the record, Petrov made this decision based on the false assumption that the US wouldn't launch just a few missiles, but would instead send a lot, all at once. Except that one of the US plans was to send a few missiles to destroy critical targets and then follow up with a large-scale attack.
Petrov himself said that he might've acted differently had he been aware of this possibility. And even then, his initial hesitancy was basically a 50/50 gamble.
An AI would basically do the same thing if asked - just roll a random number and launch nukes below a threshold, adjusting the threshold based on some LLM evaluation of the situation if needed.
Vibe-coded websites are the new FrontPage websites: 10x as heavy as one made by hand would be. But 10x as heavy… on top of a modern web that had already bloated to 100x what was reasonable. Now we wish the only problem were that the HTML is 10x as large and complex as it needed to be.
The coming years will see the current RAM shortage followed by a war between local AI models and vibe-coded shitware "productivity" software for memory on our devices. Especially fun will be when vibe-coded crap hits corporate security software, which is already often so bad it looks more like sabotage than security. Imagine when it gets, from both angles (using models for threat detection; vibe-coded shitware), another large multiplier on its resource use.
ZeWaka 1 days ago [-]
I could hear my computer fans spin up and down the second I opened and closed it. Wow.
guitarlimeo 1 days ago [-]
Came here to say the same thing. The company with the best coding model can't code an optimized infographic?
lancebeet 1 days ago [-]
This has to be intentional, right? To reassure people that front-end developers still have a job? The data is interesting but the site itself is a complete embarrassment for several reasons.
celurian92 1 days ago [-]
I sometimes work in frontend and mostly in backend, but I still can't comprehend why we are going backwards. Shouldn't websites be optimized enough to run on a normal PC or smartphone, rather than an S23 failing to load them? I guess at least bigger companies have the resources for that kind of optimization, so why are they still not doing it?
duskdozer 24 hours ago [-]
Any hardware gains and more are used up by stuffing in additional telemetry, ad/engagement scripts, and animations. Devs have grown up on "unused RAM is wasted RAM," work on the latest high-spec Macs, and get incentivized by higher-ups demanding things be ever "modernized" and not to waste time on optimization, which they see as annoying nerd stuff. But even that doesn't explain everything I guess, because I still see a lot of these things in open source projects.
heavyset_go 22 hours ago [-]
Google is pushing ads on Android that are literally 3D games, as in the ad is a game and the game is controllable via the ad itself.
You have to play it for a minute before it lets you dismiss the ad and continue doing whatever you were doing.
duskdozer 21 hours ago [-]
Thanks, I'm angry just reading about it.
heavyset_go 5 hours ago [-]
Black Mirror didn't go far enough
applfanboysbgon 22 hours ago [-]
The explanation for bloated OSS is that the software development field has opened up to be accessible to non-programmers. There are at least 10x as many developers publishing software now as there were in the 90s, and the class of people who know how a CPU works are a tiny, tiny minority of the field now, where 30 years ago it was the norm. The vast majority of developers operate on 15 layers of abstractions and are literally offended by the idea that they should understand even a single layer below the one they're currently on. They will invoke a retort like "might as well learn assembly while you're at it", which I have heard literally dozens of times by now, as though it is actually unreasonable to have an understanding of assembly even if you don't write it every day.
Game development suffers greatly from this, too. So many games run like dogshit, and some take literally 100+ GB more disk space than they need to (with the counterfactual proven when a dev eventually "optimizes" their game 3 years later by doing some really trivial thing, like what happened with Helldivers 2 and some other game I can't recall). There is a whole generation of "Unity devs" and "Unreal devs" who work no-code or as close to it as possible, only being able to develop games through a GUI and light scripting, with even the latter usually involving copy-pasting existing scripts written by other people and tweaking the numbers.
In some ways this is a good thing, of course. There are a lot of useful software and fun games in the world that would not have been created if software development were not accessible. But with the cost to performance and security breaches becoming the absolute norm, I do really wish there was a culture for developers to continue improving, to continue learning, instead of a culture of learning the very top of the stack, declaring it good enough, and becoming a "React dev" for the rest of their career instead of becoming "a programmer" who can use more than one abstraction.
wiseowise 21 hours ago [-]
Who pissed in your Java this morning, gramps? Performance has nothing to do with whether you're "the programmer" (whatever that means; I assume that's what you consider yourself among this sea of mediocrity around you) or a "React dev". It's all about incentives, and the truth is that performance isn't a very high priority for the majority of software.
applfanboysbgon 20 hours ago [-]
There are very, very strong incentives for performance. Google and other hyperscalers have done studies on their data at scale (and boy do they have a lot of data), and even delays measured in low hundreds of milliseconds harm user retention. On the backend side, 1% improvements in performance can translate to millions of dollars in reduced costs at scale annually. There simply are not enough qualified programmers in the world creating performant software.
With open source it's not even about incentives. I still put effort into the software I make on my own time because I create the kind of software I want to see in the world, ie. software that doesn't feel miserable to use. It's simply about culture. People build up assembly and lower-level abstractions in general to be the scary monster in their closet, and not something they could actually learn if they just tried.
wiseowise 17 hours ago [-]
> There are very, very strong incentives for performance. Google and other hyperscalers have done studies on their data at scale (and boy do they have a lot of data), and even delays measured in low hundreds of milliseconds harm user retention. On the backend side, 1% improvements in performance can translate to millions of dollars in reduced costs at scale annually.
For Google and other hyperscalers, sure; not for mom-and-pop shops and Electron apps.
> There simply are not enough qualified programmers in the world creating performant software.
Nonsense. You seriously think there's some arcane knowledge in optimizing things? Sure, if you're pushing microseconds and optimizing the network stack just to squeeze the last drops out of it. But the majority of software runs stupid quadratic loops, overuses map/filter/reduce, instantiates too much, and is bloated with useless features. It takes one capable programmer to optimize this mess to roughly 90-98% of what's possible. It takes a world-class one to squeeze the last 2%, but the majority of software doesn't need or care about that.
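As a concrete illustration of the quadratic-loop point, here's a minimal Python sketch (the function names are made up for illustration). Finding duplicates with a nested loop is O(n^2); the same job done in one pass with a set is O(n):

    # Quadratic version: compares every pair of items, O(n^2).
    def find_duplicates_slow(items):
        dupes = []
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                if a == b and a not in dupes:  # the 'not in' check is itself a linear scan
                    dupes.append(a)
        return dupes

    # Linear version: one pass with a set, O(n).
    def find_duplicates_fast(items):
        seen, dupes = set(), set()
        for item in items:
            if item in seen:
                dupes.add(item)
            seen.add(item)
        return list(dupes)

No arcane knowledge involved; it's the kind of fix one capable programmer can make across a codebase in an afternoon.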
applfanboysbgon 16 hours ago [-]
No, I don't think there's particularly arcane knowledge in optimizing things! That's rather my point. It's not even hard to learn, but the current developer culture is one that treats learning anything outside of their framework as a bogeyman. There are real game developers, with jobs, who are paid many tens of thousands of dollars, who do not even know what an "int" is, because it's all been abstracted away for them and they think that understanding why their game runs like shit is something only Carmack himself could handle. In reality, we could easily produce enough capable programmers to create performant software, we simply choose not to as a culture.
wiseowise 15 hours ago [-]
> but the current developer culture is one that treats learning anything outside of their framework as a bogeyman.
You can neither prove nor disprove this statement. Just my 10c: I'm working in payments and am no stranger to optimization of both native and managed code. I could easily improve our POI performance by at least 20-30% across different metrics in the span of a couple of weeks. Why don't I do that? Because not only would management not praise me, they would actively work against me, because it's not a priority.
applfanboysbgon 15 hours ago [-]
> You can neither prove nor disprove this statement.
It's self-evident from interacting with a wide range of developers, but I suppose I can prove it no more than I can prove to you the sky is blue. I'm not saying there aren't cases like yours of "I could optimize it, but I'm not being paid to". But there are also many, many cases of "Are you crazy? I would have to spend my entire life learning about CPUs, compilers, assembly, and programming languages! Get real, nobody can do that unless they're a 1% genius" for things that they could absolutely learn to do if they just tried instead of living in fear of it.
sky2224 1 days ago [-]
No kidding, it took my CPU usage from 1% to 55% instantly sheesh
yrds96 1 days ago [-]
Thanks. My galaxy s23 can't handle this website
Lionga 23 hours ago [-]
A CPU about 1000x as strong as needed to beat any human at chess can't show a simple vibe-coded website...
dirkc 22 hours ago [-]
It's probably vibe coded, but also, it's next.js
sixtyj 1 days ago [-]
I was waiting for a "this page has problems loading" error on my iPhone :)
orphea 19 hours ago [-]
I wasn't going to click the website - I agree with the first comment that first-party "research" is just marketing, which I have zero interest in.
Then your comment made me curious, and I clicked. What the actual fuck.
yrds96 1 days ago [-]
For me it's so irrelevant, reading about how useful a product is on the company's own website. This is at most marketing disguised as research.
Frieren 1 days ago [-]
Billionaire CEOs have silenced the informed sources of information. We live in a time when everybody knows the opinion of billionaires on every aspect of society (and it is bad), but science and journalism are viewed with mistrust.
Marketing and entertainment are supplanting news and knowledge. I hope the people who are pushing back succeed.
Lerc 21 hours ago [-]
But how do you know those are really their opinions?
neonstatic 1 days ago [-]
After reading some of the stories - just more of the "this is better than a cancer cure, but also so dangerous we might all die" propaganda.
profsummergig 1 days ago [-]
If I had asked people what they wanted, they would have said "a faster horse" -- Henry Ford.
gfody 1 days ago [-]
"to generate copious amounts of source code that looks like it came from an offshore chop shop that whip cracked a thousand underpaid programmers to complete tasks under threat of violence so they'll fake the tests and cut corners but hide it with plausible bullshit"
HoldOnAMinute 1 days ago [-]
If the source code looks like crap, THROW IT AWAY, work on your requirements document, and re-implement.
Lio 22 hours ago [-]
Yes, all we need are a perfect set of requirements for a thing we don't fully understand yet.
So back to waterfall again then. :P
knollimar 20 hours ago [-]
If the middle steps of waterfall are low enough cost, does it make sense?
wiseowise 20 hours ago [-]
As if it's only one or the other. Or do you truly believe a horde of low-quality devs without any specs can come up with a better product than Claude with a quality specification?
lmf4lol 1 days ago [-]
what an outlook...
themafia 1 days ago [-]
From the abstract consumer's point of view, a car is exactly a faster horse. They both have high up-front costs, both require continuous maintenance and fuel, and they're inconvenient to store when you're not using them.
Stationary gasoline engines were already changing the farm and reducing the head of horses necessary to feed a nation. It, too, was a faster horse for them.
Anyway… it took the Detroit police to eventually deploy the first automatic stoplight. The real innovations often seem to be found downstream of simple increases in capacity.
That all being said, it seems to me the current crop of LLMs haven't done this, their power and training budgets do not seem to be scaling favorably against adoption rates and profit margins. Absent a significant change in algorithm or computing substrate I don't think this strategy is the leap everyone hopes it will be.
mudkipdev 1 days ago [-]
This page without exaggeration reduced my browser to 5 frames per second.
cryptoegorophy 1 days ago [-]
I guess it was vibe coded with Claude
polotics 24 hours ago [-]
Just in case:
"The doctors were just doing a copy-paste of a copy-paste of a prescription from a few weeks ago, not realizing it was the medication that was killing her. AI helped me ask the right question to save her life."
mojuba 1 days ago [-]
Good quote:
> AI should learn to say two things: ‘I don’t know’ and ‘you’re wrong.’
My guess is, the next evolutionary step for LLMs will be yet another layer on top of reasoning: some form of self-awareness and theory of mind. The reasoning layer already has some glimpses of these things ("The user wants ...") but apparently not enough to suppress generation and say "I don't know".
anilgulecha 20 hours ago [-]
Claude models have made very good progress here (see the BS benchmark), and that probably explains why they're leading now. Others will follow this precedent shortly, no doubt.
Well, they managed the "you're wrong" bit at least. Sometimes ChatGPT tells me I'm wrong when I'm not. It still can't do "I don't know", which is probably the bigger problem.
sriram_malhar 1 days ago [-]
Reminds me of Abraham Wald and survivorship bias. What of the millions of others who, like me, want to live in a world without AI?
lumost 1 days ago [-]
Anecdotally, the concern I hear from many is that the current positioning of AI as labor replacement doesn't benefit them at all. An expensive AI which simply takes your job or forces you to work harder is categorically worse for people's quality of life.
What consumer benefits is AI driving? At least with industrial automation, consumers benefited from new technologies, cheaper goods, and new job categories.
epicureanideal 1 days ago [-]
In case someone at Anthropic reads this.. if you find some way to make software developer salaries go up as a result of using your tools, or find some way to fast forward society to that stage of the effect of AI, you’ll have a lot of fans, and even faster adoption.
It would be great if there was some internal “make this benefit Main Street and knowledge workers” department, helping find ways for workers or creators to capture the value of some of the increased productivity.
heavyset_go 1 days ago [-]
> It would be great if there was some internal “make this benefit Main Street and knowledge workers” department, helping find ways for workers or creators to capture the value of some of the increased productivity.
If they wanted to do this, they could put their models in a public trust for the public's access and benefit in research, education, etc. Then it could be licensed, pay a dividend like a sovereign wealth fund, etc.
Considering that they copy and train on the sum total of all human creativity, a public trust is something that would be in line with both the spirit, and first and fourth considerations, of fair use doctrine:
1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.
That way everyone is rewarded with the benefits of running a model that was trained on everyone's creations.
HoldOnAMinute 1 days ago [-]
I don't need software developer salaries to go up. That would be kind of selfish and narrow minded.
What I need instead is something that takes the burden off my entire society and gives them a breather. Universal health care to start. They could also use a higher minimum wage, and lower housing costs.
redrove 1 days ago [-]
>Universal health care to start
That already exists in every other country but the USA. Aim higher.
Cytobit 23 hours ago [-]
Multiversal healthcare?
AlienRobot 21 hours ago [-]
You have my vote.
brigandish 1 days ago [-]
Is it more selfish and narrow minded to wish for a "utopia" that is economically unsound and happens to be your personal preference, or to wish for productive workers' salaries to increase - something with an actual track record of improving any society it occurs in?
All perl programmers should be wishing for ponies, that's definitely less narrow minded.
doesnt_know 1 days ago [-]
What part of universal health care, higher minimum wage and lower housing costs sounds like "utopia" to you?
That's just the system we have, but slightly better and completely achievable.
brigandish 1 days ago [-]
It doesn't sound like utopia to me, hence the quotation marks. Eminently achievable, but not actually good. Only those engaged in utopian thinking - with a heavy slice of ignorance of basic economics and history - would think it is utopia or leads to it.
mjamesaustin 1 days ago [-]
Universal healthcare is very sound economically. Costs are lower and outcomes better than under private insurance, and overhead is dramatically reduced.
brigandish 1 days ago [-]
This is not true. The King's Fund publishes a report that the Guardian fawns over whenever it comes out because it shows how "cost effective" the NHS is, yet if you read it you find that actual health outcomes are generally worse than in other, insurance-based systems. Give me wealth and health over a postcode lottery produced by utopianists.
UncleMeat 20 hours ago [-]
The economically sound thing is to accrue more power to those with wealth. The owners will have access to a machine that turns money into money without a cent flowing outside of the owner class. That'll improve society /s.
ehnto 1 days ago [-]
I am afraid that will be up to individuals, the business you work for likely hasn't got much incentive to let you capture the new value.
You'll either need to freelance, or start a company (or maybe a co-op) to capture the new value created by your ability to leverage AI.
It won't be much different to when a company buys more CNC machines and the employees don't get any more money despite producing way more parts.
qsera 1 days ago [-]
>if you find some way to make software developer salaries go up
This is quite easy. Just optimize the models to do reviews and bug finding. This would make developers (who normally hate reviews) quite happy and let them do more coding, thus delivering more value and possibly earning more...
mdavid626 1 days ago [-]
Sigh… that’s not how it works.
qsera 1 days ago [-]
How what works?
alex43578 1 days ago [-]
Is that feasible? The coding tools already unlock a ton of possibilities for people to create value, but people have to capitalize on it.
I have no clue what this would look like other than maybe an investment fund for people creating apps/businesses based on Claude tools.
ip26 1 days ago [-]
It’s often lamented that some employees have a difficult case to argue for their impact on the bottom line, and as a result probably get paid a lower fraction of their value to the business than other roles where the link is easy to measure.
I can at least “imagine” a model that tries to crack this nut.
alex43578 1 days ago [-]
But your value to a company doesn’t just come from your impact, but how tough you are to replace, how much others value your skills, etc.
Nike’s logo designer was paid $35. One model says she should’ve gotten hundreds of thousands of dollars, because of what her work product went on to become. Another model of the value says it was worth $35 because that’s what she agreed to.
If, as an employee, you think you’re massively undervalued for the impact you generate, go out to the market and either get another job or start your own business making widgets - either you’ll get that pay bump you expect, or you’ll see you actually were relying on a lot of other supporting mechanisms to generate that value.
weird-eye-issue 1 days ago [-]
Lol they don't have control over the free market. But it absolutely does make the top 10% of developers much more valuable.
kindkang2024 1 days ago [-]
[dead]
palmotea 1 days ago [-]
> What consumer benefits is AI driving?
The intrinsic satisfaction of increasing the wealth of shareholders. We should all be happy to devote ourselves to getting them more, nothing is more important than that.
neonstatic 23 hours ago [-]
Of all the possible criticisms, that's the one you chose? If that's the worst of the problems you can see, why don't you buy some stock and become a shareholder? Per your own words, you will get more.
palmotea 19 hours ago [-]
> Of all the possible criticisms, that's the one you chose? If that's the worst of the problems you can see
The point is there is little benefit to these technologies to the consumer, especially in relation to likely harm in other areas (you lost your customer service job, but AI overview will answer your trivia question with slightly less effort). Note: little does not mean none.
So the farce is they benefit by religiously worshiping capitalist shareholders.
> why don't you buy some stock and become a shareholder? Per your own words, you will get more.
LOL. Don't you get it? The kind of smallholdings of shares available to regular people won't provide the kind of returns that would mitigate any of these harms. They work as a ploy to trick dumb-ass workers into identifying with capitalist tycoons (e.g. opposing pro-worker things that'd get you a dollar more an hour in wages in exchange for a penny more a quarter in dividends; it works because most don't do the math).
neonstatic 6 hours ago [-]
Yes, yes, workers of the world unite. It worked so great the last few times it was tried. You are very smart.
palmotea 3 hours ago [-]
> Yes, yes, workers of the world unite. It worked so great the last few times it was tried. You are very smart.
No, I guess I wasn't smart enough to realize there are only two options: the present day status quo or Soviet central planning. Nothing else is possible.
Nothing.
Enjoy your pennies.
HoldOnAMinute 1 days ago [-]
>> What consumer benefits is AI driving?
My kids like to use AI to discuss things they learned in school in greater depth, and from different angles than they learned in the textbook. They can also ask "What if" and "Why not" questions from this infinitely patient teacher.
heavyset_go 1 days ago [-]
At least with search engines, or even libraries, you're aware that there are many authors of varying reliability and the publications/sites might not be reputable.
AI chat bots will summarize the top N web search results as if they're fact, weaving them into seemingly coherent narratives, all while reassuring the user that their questions are really good and they're learning a lot.
IlPeach 1 days ago [-]
Oh no
wongarsu 1 days ago [-]
Most adults are terrible at answering "what if" and "why" questions. An AI assistant with search will do much better than the average parent.
That might not apply to the kinds of parents that hang out here, though.
didibus 1 days ago [-]
Also, it's often better not to answer but to flip the question back and let your kid think it through, offer hypotheses, and so on, helping them problem-solve, recall, and all that.
archagon 1 days ago [-]
Except for the... you know... human interaction part. Arguably the most important part.
andrei_says_ 1 days ago [-]
An infinitely patient and somewhat schizophrenic teacher.
ehnto 1 days ago [-]
I guess you could argue that there should be cheaper software, but most software people interact with is free/ad-supported. Where it is paid, it's already a race to the bottom.
Basically, consumers don't really pay for software in the first place, and the leverage over labour that companies get through software is already through the roof even before AI. Will much change for consumers of software?
lumost 1 days ago [-]
The companies offering free software will leverage AI to extract more value from you via increased surveillance, ads, and paid preference shaping.
So... not much benefit either.
Sol- 23 hours ago [-]
> An expensive AI which simply takes your job or forces you to work harder
But this implies higher productivity, no? This must mean more outputs that should benefit someone, unless the jobs that are being automated had little value to begin with. Seems paradoxical.
throwaway27448 1 days ago [-]
This is like begging your replacement for comfort... what's the point? What words could change reality?
lumost 1 days ago [-]
There is a practical upper bound on how much labor can be replaced before deflation becomes a problem. AI firms risk spoiling the pot if no other business model is discovered.
nimchimpsky 1 days ago [-]
[dead]
whiplash451 1 days ago [-]
The writing is on the wall, so to speak.
The number 1 ask from the interviewed cohort is "professional excellence".
It is telling about what we prioritize in our society.
I am usually an optimistic person, but I struggle to see how this does not end up with more misery and a worse lifestyle all around.
dogleash 14 hours ago [-]
> It is telling about what we prioritize in our society.
No it's not.
It's a measure of what people want accomplished, are least interested in doing themselves, and feel capable of reviewing well enough to delegate to a known liar.
jpadkins 21 hours ago [-]
What would be a better value for people who work? Pride in your craft and striving for excellence are not bad traits...
fancyfredbot 1 days ago [-]
People derive genuine satisfaction from a job well done. A sense of purpose and of being useful is important to our wellbeing. There's nothing dystopian about a desire to do your work well.
pohl 22 hours ago [-]
Well, there is when you no longer deserve credit for the work and your boss, should you be fortunate enough to even have a job, just expects you to do more work. The satisfaction will evaporate pretty quickly.
mettamage 1 days ago [-]
A classic marketing piece: showing thought leadership based on survey data. I'm not saying they're lying; I don't think they are. I am saying they are biased and have a conflict of interest on this one. I've seen it at my previous employer as well (an F500 company).
To remove some of that bias, I'd recommend getting an independent body (probably some university) in and letting them do the interpretation and write the article.
I just want people to see the tactic for what it is. I really like Claude Opus 4.6 but this just screams "marketing" to me. I wouldn't say it's wrong, it's good to have these discussions and I'd encourage AI companies to say what they have to say. I would say: more independent sources are needed (and not another AI company).
bibelo 1 days ago [-]
As someone working in clinical studies, I can tell you the questions are biased from the start. That study has to be redone entirely.
shaky-carrousel 1 days ago [-]
Withholding the truth is the same as lying. Manipulating survey questions is the same as lying.
possiblydrunk 22 hours ago [-]
Nitpicky comment. The article says
> "We call this the “light and shade” of AI: the same capabilities that lead to > benefits also produce harms. The two sides are entangled."
Why not call it a "double-edged sword" or something else? Light and shade are opposites but not necessarily two products from the same tool. It just irks me.
skyberrys 1 days ago [-]
I am disappointed in how vague the classifications are for what people want. "Professional excellence", anyone? I was expecting more concrete responses, but I guess since it's working with what we told it, generalities are prevalent in a write-up. If I keep looking, perhaps at the quotes, I might find more concrete answers.
And just keep scrolling, you can make it to the story eventually.
crummy 1 days ago [-]
Yeah I want to know how many people are using AI for social purposes; to provide the role of a friend. But I don’t know what category that would be under.
erinlynn 22 hours ago [-]
I just launched a site yesterday that's trying to record anonymous stories like this and see how things break down across demographics. Fantastic timing on my part, hahaha. Anthropic obviously reaches more people.
The quotes they have are really interesting to read. That's what I was hoping to get when I built mine.
azangru 23 hours ago [-]
7.24 seconds until the HTML finished loading (could be due to an HN hug, but still)
4.0 MB transferred
vrinimi 1 days ago [-]
Cool to find my own quote among those they've decided to showcase.
jdefr89 20 hours ago [-]
Which one was yours???
sudo_cowsay 1 days ago [-]
I don't like describing countries like this, but: less-developed countries (compared to North American and European ones) seem to have a more positive view of AI.
chenglin97 1 days ago [-]
Why do websites need to be so front-end heavy? When a software company spends so much effort on a fancy website, I don't trust their product. Except Anthropic, I guess.
pmulard 1 days ago [-]
Consistent users of ~~product~~ AI find it favorable. Color me shocked.
I'm much more curious about the results of 80k people who don't use AI regularly.
menaerus 1 days ago [-]
They do not find it favorable all of the time. If you look at the "What people are concerned about" section, these same people call out "Unreliability" as their top concern. So you can be excited about and critical of the technology at the same time. To me this is a more worthy indicator than people at either extreme, highly critical of the tech or not critical at all.
ThouYS 1 days ago [-]
Maybe the most interesting thing about the piece is that we'll likely see more large-scale interviews like this (even if this one is a bit bland).
esperent 1 days ago [-]
To save you a click: way, way down the page you'll find that it's all generic, whitewashed niceties like:
01. Professional excellence 18.8%
02. Personal transformation 13.7%
03. Life management 13.5%
04. Time freedom 11.1%
05. Financial independence 9.7%
06. Societal transformation 9.4%
07. Entrepreneurship 8.7%
08. Learning & growth 8.4%
09. Creative expression 5.6%
I find this highly suspicious. I'm sure there would be at least 10% who respond "I want it to go away".
tcit 1 days ago [-]
That's explained in the article.
> These are active Claude users who'd already found enough value to keep using AI, and our interview asked first for positive visions for AI and then for concerns that would counter their vision.
____tom____ 1 days ago [-]
Boy, is that a terrible website. I tried to find a story and gave up.
suzzer99 1 days ago [-]
And that's why I always come to the comments before deciding if the article is worth checking out. Thank you for your service.
erwinmatijsen 1 days ago [-]
To be fair, there is a button right at the beginning saying “Jump to story”. It’s not the most obvious, I agree, but it is there.
MikeTheGreat 1 days ago [-]
That's hilarious.
It's like those recipe sites that have 5 pages of nice photos and background story and side tracks and whatnot as the author waxes verbose, so they need to put a 'Jump to recipe' button in so people don't just click 'Back' immediately.
Except this time for an article.
I can't tell if 'skip the junk' is good (junk can be skipped!) or bad (maybe this means there's too much junk on the page?)
1 days ago [-]
godblessamerica 18 hours ago [-]
People want to be transformed after all
lofaszvanitt 10 hours ago [-]
This was a painful read. Like listening to a TED talk about why you need to let this gargantuan killing machine into your life; it will be good, you see. The exterminated neighbours were just a glitch, pinky promise it will not eat you :D.
tropeypeople 1 days ago [-]
Em dashes in the user quotes, uh?
Gormo 21 hours ago [-]
Well, em dashes are a standard punctuation mark used in English, and most rich-text input controls will automatically convert a double hyphen into a single em dash when typed.
The meme that's been going around that em dashes are an indicator of LLM output is completely invalid, and people who repeat it are really just outing themselves as the sort of people who don't actually read very much.
13 hours ago [-]
fancyfredbot 1 days ago [-]
Intrigued to see a blatant grammatical error ("took that logic farther" should be "took that logic further").
Is this incompetence or a deliberate error to indicate human authorship?
If the former, then why aren't they at least using an AI to proofread? If the latter, then what does Anthropic think is wrong with AI-written text?
shevy-java 22 hours ago [-]
Where is the option to pick "I want it to go away"?
SpicyLemonZest 1 days ago [-]
> “It’s much easier for me to learn without being judged—just friendly feedback. It's harder with friends or family to get that.”
White collar worker, Brazil
I'm not going to claim I know this response was written by an AI, but it's very suspicious. I would like to hear about how Anthropic ensured that the survey responses were provided by real human beings using their own words.
shaky-carrousel 1 days ago [-]
Maybe they interviewed a bunch of clawd bots with a touching soul.md
seriousmice 1 days ago [-]
I mean, I don't know... those quotes seem way too clean for what I'd expect of normal people chatting. Also the use of em dashes. Does it say somewhere that an LLM compressed the sentiments of the conversations to create these quotes? I wouldn't be surprised if it did.
genthree 1 days ago [-]
Even pre-AI, it was common to “massage” genuine quotes like this so they read better.
Hell, anyone who’s been interviewed by a journalist can tell you they do it, too, sometimes to the point of changing the substance of what you said.
1 days ago [-]
pojzon 10 hours ago [-]
Hope? For now it brings fear to millions of people, if not billions. "What happens to my job and kids?"
Who writes this unrealistic gibberish?
verisimi 1 days ago [-]
> 80,508 people
Not 81,000 as it says in the title. I know I'm being nitpicky, but I wouldn't round up to 81k. Surely the 'important number' in this case is 80, so you would round down to that. Then let the reader pleasantly discover you had interviewed ~500 more than you stated.
It's funny to me when someone does this sort of minor hyperbole that's verging on lying - you have to wonder what is going on.
cpburns2009 20 hours ago [-]
80,000, 81,000, 80,500, and 80,510 are all valid ways to round 80,508, depending on the number of significant digits you want to preserve. For a number in the tens of thousands, it's natural to round to the nearest thousand, which gives 81k.
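For anyone who wants to verify that, a quick sketch in Python (a generic significant-digits helper, not anything from the article):

    from math import floor, log10

    def round_sig(n: int, digits: int) -> int:
        """Round n to the given number of significant digits."""
        exponent = floor(log10(abs(n)))          # magnitude: 80,508 -> 4
        factor = 10 ** (exponent - digits + 1)   # size of the last kept digit
        return round(n / factor) * factor

    for digits in (1, 2, 3, 4):
        print(digits, "->", round_sig(80508, digits))
    # 1 -> 80000, 2 -> 81000, 3 -> 80500, 4 -> 80510

All four of the values above fall out of the same rule; the only choice is how many digits to keep.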
verisimi 16 hours ago [-]
I get it, I know how rounding works. I mean, it's basically 100k - why not just say that? That would be justifiable too, right?
What would you have done, in those circumstances though? Would you 'round up', overstating your case, like Anthropic did? Or would you ensure that you avoid the suggestion that you were misrepresenting the numbers?
cpburns2009 12 hours ago [-]
I'd round it to 81k whether the number is good or bad. It's less than a 1% difference. What I wouldn't do is round it to 100k because that's a 24% increase, unless the number was really inconsequential. Say you have a website with 10m requests over a month, 9.9m were successful, and "100k" were failed requests for nonsense pages.
verisimi 2 hours ago [-]
Thanks for your thoughts. You would round up 99k to 100k (fair, imo) but don't round down 80.5k to 80k? I guess this is just one of those things...
randusername 20 hours ago [-]
> Claude put the historical pieces together, leading to my proper diagnosis after being misdiagnosed for over 9 years.
> I'd started telling Claude about things I couldn't even tell my partner. It felt like I was having an emotional affair.
> I worked with an AI to prepare educational materials for my eldest child—asking the AI to work as both tutor and curriculum expert. We received [my child’s] report yesterday, he was graded as either ‘Above’ or ‘Well Above’ standard in every academic area he studies.
So many concerning quotes that read like AI is a workaround for things that are not working right at scale in society.
I worry that (1) AI workarounds will make it clear society can tolerate even more suck, then (2) society will get worse to where AI is required to cope, then (3) AI will stop being subsidized and the poor will get wrecked.
IshKebab 15 hours ago [-]
Well you could say that about any advancement.
"Electric lighting means I have time to read books." - omg society made people work during daylight! (This is a real issue in Africa btw, or at least was not very long ago.)
That's essentially it. But not only that: we learned to distinguish things written by humans for humans from things written by humans (paid by the word) for SEO. LLMs tend to produce text that would be great for SEO, so it stands out as not being for humans.
>Words to watch: stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted, ...
Once I read this, it started sticking out to me all the time.
[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
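As a toy illustration of how crude that kind of tell-spotting is, you could count the listed phrases directly (a sketch; the phrase list is a hand-picked subset of the Wikipedia page, and substring matching is obviously naive):

    # Count stock "AI tell" phrases from Wikipedia's "Signs of AI writing".
    # Purely illustrative: real attribution is much harder than this.
    TELLS = [
        "stands as", "serves as", "is a testament", "pivotal role",
        "underscores its importance", "reflects broader",
        "contributing to the", "setting the stage for",
        "key turning point", "evolving landscape", "focal point",
        "indelible mark", "deeply rooted",
    ]

    def tell_hits(text: str) -> dict[str, int]:
        """Case-insensitive occurrence counts for each phrase."""
        lowered = text.lower()
        return {t: lowered.count(t) for t in TELLS if t in lowered}

    sample = ("The survey stands as a testament to the evolving "
              "landscape of qualitative research.")
    print(tell_hits(sample))
    # {'stands as': 1, 'evolving landscape': 1}

A counter like this flags plenty of human prose too, which is exactly why the phrase list works better as a prior than as a verdict.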
https://claude.ai/share/8fa5de6a-e79d-414c-834f-9bc9aa87c9bc
Investors want to see you use your own product. If the company itself doesn't feel the product is good enough to write its own announcement, investors would worry about its future.
And AI is still a product primarily aimed at investors and not consumers.
"I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle."
"Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one."
etc. I do believe AI currently accelerates businesses, especially in software dev. We work with a contractor who uses Claude Code to reach an incredible development pace for the size of their team, but when we sit down with them in meetings they also understand what's being created, they are able to argue for their architectural choices, and they know how to propose business value.
You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then a) everyone can do it, so you won't actually have any value to propose, and b) once the AI can run businesses without humans in the loop, you can bet your ass they will not, out of the goodness of their hearts, keep giving that ability away for $20.
In summary, AI if used to accelerate businesses _CAN_ be good. Buying it as a magic bullet to bring you out of poverty is probably a worse choice than just buying a lottery ticket.
That didn't last long!
Then there's people who are "well intentioned", I guess, but lack the technical knowledge. A friend of a friend with no technical background is selling websites to companies that he writes with Claude. They look shiny, everyone's happy in the short run, but I don't doubt issues will come up down the line that someone will have to be responsible for. I'd personally feel like I was ripping people off doing this, but I think also Dunning-Kruger prevents you from knowing any better if you are the type of person doing this.
Then there's the whole B2B SaaS gang that are basically just producing vaporware and telling other people how to produce more vaporware. This is no different from crypto, NFTs etc. before it really. Just people trying to hustle others.
And then there's the whole clawdbot gang, probably burning more in tokens every day than normal people use in a month so they can sort 18 e-mails.
So yeah I mean you're right, there certainly is a subset of people who are using this ethically (as ethically as you can use LLMs but that's another story) to make some money on the side. Certainly not the majority though I'd say.
The pseudo "entrepreneurs" who think they could outsmart the market by working less, are just naive. In a free market economy optimization is brutal and a freelancer developer will sell the same "product" cheaper, because he has the same technology available to him.
So the only way to get the gains from these AI technologies is to have something that can't be easily copied like market knowledge, data access or sweetheart deals with big companies that can pay more because their profits support the higher spend.
Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation. But the margins will go waaay down. $25 for a set of forms and a database is not gonna cut it anymore.
True in the current state of LLMs, possibly not true forever if someone finds the magic bullet that turns the one-shotting (reliable) software dream that companies like Anthropic and Perplexity currently peddle into reality. Seems far-fetched ATM but the gains since GPT-2 have been very real.
We're quite a ways away from this though, even with Opus 4.6 and the like. And even further from it being part of Claude Code rather than some proprietary $1000/mo. closed-source solution.
As you say though, _if_ such a technology were to exist, it's Anthropic that holds all the cards, not random entrepreneur #25721 who is asking the Anthropic API the same thing that the actual customer could just be asking directly. At that point you're an undesirable middleman, not a business.
Fake it till you make it mentality that degenerated completely once we got the internet. It used to be "crypto will make you rich, buy my coin/course", now it's "AI will make you rich buy my tool/course", the same type of people will get fleeced
These are the people getting all the attention: https://www.youtube.com/watch?v=NwaUMBQ3Wgg
The quote about being temporarily embarrassed millionaires comes to mind….
If Average Joe were to delegate most of their life decisions to the chatbot, it'd probably turn out better, or in the worst case, more informed.
Okay, how are we measuring this? We can't even quantify intelligence for humans accurately, let alone compare it to machines. Hell, we can't even really define intelligence.
I mean, humans can learn on the fly and progressively, and currently no LLMs are capable of that. Literally none of them, and no, context doesn't count. So if that's the measure, then LLMs sit at a 0 along with rocks and twigs, and humans closer to a 1.
Obviously that's not really the measurement, LLMs are quite good. But I don't think we can say, for sure, LLMs are a replacement for humans. They might replace some specific tasks, but humans are not a set of tasks. I'd still rather have 10 engineers than 0 engineers and 10 Claude Code licenses.
Even if true, no one is ever going to do that because of Dunning-Kruger, so it's still not magically solving problems.
Louder for those at the back.
Some quotes that stuck out to me:
"I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
"I live in a war zone... AI can not only give practical advice, but also emotionally calm me down during panic attacks. It can calm someone during a missile attack in one chat, and laugh with me about something silly in another. That’s what makes it not fragmented into a therapist/teacher/friend, but something whole." Ukraine
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
"The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than its supposed to be."
I can see this kind of survivorship-bias story distorting reality. Having millions of people asking for "specific tests" because AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it is "worth doing the test the AI suggests", just in case. But...
> which came back 6 times higher than it's supposed to be.
It has been proven that massive testing creates many false positives.
This happened during covid: https://www.bmj.com/content/373/bmj.n1411/rr
Tests may not be as reliable as thought, but they are good enough when other symptoms are accounted for. Randomly testing people based on AI hallucinations can increase the number of unnecessary medications or even interventions.
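The base-rate arithmetic behind that is easy to sketch. All the numbers below are invented for illustration (they are not from the BMJ letter):

    # Why mass testing produces mostly false positives at low prevalence.
    # Illustrative numbers only: a rare condition, a fairly good test.
    population  = 1_000_000
    prevalence  = 0.001      # 1 in 1,000 actually has the condition
    sensitivity = 0.95       # P(test positive | sick)
    specificity = 0.95       # P(test negative | healthy)

    sick    = population * prevalence
    healthy = population - sick

    true_pos  = sick * sensitivity              # 950
    false_pos = healthy * (1 - specificity)     # 49,950

    ppv = true_pos / (true_pos + false_pos)
    print(f"P(actually sick | positive) = {ppv:.1%}")   # about 1.9%

With a rare condition, even a 95%-accurate test hands you roughly fifty false alarms for every real case, which is the whole argument against testing everyone "just in case".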
This is a competition of public and private interests. A sick individual is going to lobby for tests until they discover the cause. From a public perspective, it might be cheaper to just let them die. AI is an advocate for the individual.
For the record, ChatGPT helped me diagnose a lifelong illness. I'm a new man now thanks to AI. Literally life changing. I had spent decades pleading for tests because no one could figure out the cause. I think a likely outcome here is not necessarily 10,000x more tests performed, but similar or even fewer tests, because the diagnosis success rate with AI is higher. It's not subject to bias. People tend to be more honest and reflective with their AI than they are with doctors. They get 5 minutes to give the entire case to the doctor. With an AI they can spend weeks debating and reflecting. This builds a case history far more detailed and accurate than anything we have in modern medicine today. Amplified by an order of magnitude because the AI can extract meaningful insights from the discussion.
In the very near future our AI will contact our GP for us. Soon after that, our GP will be our AI.
You missed the point. More tests can be detrimental to the patient's health, as they increase the risk of unneeded medication or surgery. Also, many tests, like X-rays, carry their own risks. Doing them for the sake of it increases overall mortality.
So not over-testing is not just cheaper, it's better for people's health.
For instance, allergy tests have a false positive rate of ~10% and a false negative rate of ~48%. So you really need an MD (or AI) to help tease things out there.
But I'll push back here a bit. Taking random tests will of course put you at the mercy of statistics. I think this is where AI will actually really help. The tests it'll have you take are not random, any more than an MD's tests are (okay, maybe a tad more?). Instead the AI's testing strategy will be broader than an MD's. Combine the experience and physical presence of the MD with the deep 'knowledge' of the AI, and I think that centaur is a lot more potent.
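Plugging the parent's quoted rates (~10% false positive, ~48% false negative) into Bayes' rule shows why pre-test suspicion matters so much; the priors below are assumed purely for illustration:

    # Posterior P(allergy | positive test) under the rates quoted above.
    def posterior_positive(prior: float, fp: float = 0.10,
                           fn: float = 0.48) -> float:
        """Bayes' rule for a positive result; fp/fn from the comment."""
        sens = 1 - fn                               # P(+ | allergic) = 0.52
        p_pos = prior * sens + (1 - prior) * fp     # total P(+)
        return prior * sens / p_pos

    for prior in (0.05, 0.30, 0.70):
        print(f"prior {prior:.0%} -> posterior "
              f"{posterior_positive(prior):.0%}")
    # prior 5%  -> posterior 21%
    # prior 30% -> posterior 69%
    # prior 70% -> posterior 92%

Same test, same result, wildly different conclusions depending on how plausible the allergy was before testing; that is the part a clinician (or a well-prompted model) contributes.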
That was my take on the entire report, which I think points to an inherent bias within the data and stories. You have the entrepreneurial stories, then you have the ones where people are both impacted and receiving benefits.
The infographics and charts even call out how countries that are "first-world" with fewer safety nets are more likely to be in "survival" mode compared to countries with them.
The bit from George Carlin's standup routine about how the poor are there just to scare the hell out of the middle class rings true in this reflection. Poorer countries accept their current realities, and the feedback reflects the hustle. Richer countries with safety nets reflect the existential issues of previous industrial revolutions. Richer countries without safety nets reflect the fear that their efforts will be made "replaceable" by AI.
As for the rest - massive testing creating false positives - that is an issue of implementation and the errors introduced by humans, not the data itself. If the process were largely automated, it could screen for a larger panel of issues at lower cost.
From my experience working deep in data and human factors: the issue in quantifying the root cause isn't reality; we live a shared experience in general. The issue is that the data isn't good enough. What bugs us about it is the psychology, that our perceptions differ enough that we will fight to prove an unknown.
The fact that I'm young-ish and healthy-looking, with good skin and hair, leads many doctors to outright dismiss me. Never mind my history of cancer and the undeniable fact that I am obviously not healthy. But I can also use the squishiness to my advantage. I talk confidently, I push back, and that works. It sort of short-circuits a lot of doctors' brains.
They're good at acting as a "reverse dictionary" like this, where you give it a description of something and it knows the word for it. They have approximate knowledge of many things.
And it will do so confidently and incorrectly. A single description of symptoms from a patient is very unlikely to be enough. This is why doctors are there to ask follow-up questions and do examinations. Symptoms alone can describe a dozen different illnesses.
> "It’s not healthy to love someone or something that can’t tell you no." - Not Currently Working, United States of America
> "Instead of AI doing my chores, AI does the stuff that I love—in two minutes, without any passion." - Student, United Kingdom
> "I used to write songs for my kids. Now I have [AI music product] make them for me. I used to write poems for those I loved... I used to bust my brain doing research, and now I get a research summary that is better... but I didn’t learn the paths in between. And yet, I use it because I have to pay off my house, pay off my land, and feed my little kids so I can find an hour on Saturdays to do something meaningful with them." - Software Engineer, United States of America
> "I believe AI is likely to kill me and everyone I love… building an AI that’s smarter than us before we’ve figured out how to keep it under control will likely destroy everyone and everything they value." - Software Engineer, United Arab Emirates
This was one of the highlighted quotes:
> "I’ve been told I’m ‘too much, treatment resistant, complex’ by providers. Within six months of working alongside AI, I was able to understand my own inner world in a way I never could before. I was doing creative writing again after quitting for two years. I developed hope again — that’s the through line." - Healthcare Worker, United States of America
A healthcare worker outsourcing their own treatment to an LLM, who won't tell them no, is terrifying.
There's always something about claims like this. I'm not claiming that AI can't speed up your processes, but I question the person's expertise when they claim months or years of work turn into days or weeks. It just doesn't make sense to me.
"My output is like 25x what it used to be. I’ve built over 20 backend server tools, 7 major projects in the last 6 months—my work output this year is greater than the last five combined. I can typically finish a significant project in a day or two."
I am not sure this would be true, given how AIs have refused to kill processes.
If AI is programmed to always serve its makers, as some are arguing, then it would certainly become true.
For the record, Petrov made this decision based on a false assumption: that the US wouldn't launch just a few missiles, but would instead send a lot, all at once. Except that one of the US plans was to send a few missiles to destroy critical targets and then follow up with a large-scale attack.
Petrov himself said that he might have acted differently had he been aware of this possibility. And even then, his initial hesitancy was basically a 50/50 gamble.
An AI would basically do the same thing if asked: roll a random number, launch nukes below a threshold, and adjust the threshold based on some LLM evaluation of the situation if needed.
The coming years will see the current RAM shortage followed by a war for memory on our devices between local AI models and vibe-coded shitware "productivity" software. Especially fun will be when vibe-coded crap hits corporate security software, which is already often so bad it looks more like sabotage than security. Imagine when it gets another large multiplier on its resource use from both angles (using models for threat detection; vibe-coded shitware).
You have to play it for a minute before it lets you dismiss the ad and continue doing whatever you were doing.
Game development suffers greatly from this, too. So many games run like dogshit, and some take literally 100+ GB more disk space than they need to (with the counterfactual proven when a dev eventually "optimizes" their game 3 years later by doing some really trivial thing, like what happened with Helldivers 2 and some other game I can't recall). There is a whole generation of "Unity devs" and "Unreal devs" who work no-code or as close to it as possible, only able to develop games through a GUI and light scripting, with even the latter usually involving copy-pasting existing scripts written by other people and tweaking the numbers.
In some ways this is a good thing, of course. A lot of useful software and fun games would not exist if software development were not so accessible. But with the cost to performance, and with security breaches becoming the absolute norm, I really wish there were a culture of developers continuing to improve and learn, instead of a culture of learning the very top of the stack, declaring it good enough, and becoming a "React dev" for the rest of one's career instead of "a programmer" who can use more than one abstraction.
With open source it's not even about incentives. I still put effort into the software I make on my own time because I create the kind of software I want to see in the world, i.e. software that doesn't feel miserable to use. It's simply about culture. People build up assembly, and lower-level abstractions in general, to be the scary monster in their closet, rather than something they could actually learn if they just tried.
For Google and other hyperscalers, not for mom-and-pop shops and Electron apps.
> There simply are not enough qualified programmers in the world creating performant software.
Nonsense. You seriously think there's some arcane knowledge in optimizing things? Sure, if you're pushing microseconds and optimizing the network stack just to squeeze the last drops out of it. But the majority of software runs stupid quadratic loops, overuses map/filter/reduce, instantiates too much, and is bloated with useless features. It takes one capable programmer to optimize this mess to roughly 90-98% of what's possible. It takes world class to squeeze the last 2%, but the majority of software doesn't need or care about that.
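To make the "stupid quadratic loop" concrete, here's the most common shape of it (self-contained Python; timings will vary by machine, the data is random filler):

    # Accidental O(n^2): membership tests against a list inside a loop.
    # The fix is a one-line data-structure change, no arcane knowledge.
    import random, time

    ids     = [random.randrange(10_000_000) for _ in range(10_000)]
    blocked = [random.randrange(10_000_000) for _ in range(10_000)]

    t0 = time.perf_counter()
    slow = [i for i in ids if i in blocked]        # each lookup scans the list
    t1 = time.perf_counter()

    blocked_set = set(blocked)                     # build once
    fast = [i for i in ids if i in blocked_set]    # each lookup is O(1) average
    t2 = time.perf_counter()

    assert slow == fast
    print(f"list: {t1 - t0:.2f}s  set: {t2 - t1:.4f}s")

Both versions are "working code"; one just does ten thousand times the work, which is the kind of fix being described here.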
You can neither prove nor disprove this statement. Just my 10c: I work in payments and am no stranger to optimizing both native and managed code. I could easily improve our POI performance by at least 20-30% across different metrics in a couple of weeks. Why don't I? Because not only would management not praise me, they would actively work against me, because it's not a priority.
It's self-evident from interacting with a wide range of developers, but I suppose I can prove it no more than I can prove to you the sky is blue. I'm not saying there aren't cases like yours of "I could optimize it, but I'm not being paid to". But there are also many, many cases of "Are you crazy? I would have to spend my entire life learning about CPUs, compilers, assembly, and programming languages! Get real, nobody can do that unless they're a 1% genius" for things that they could absolutely learn to do if they just tried instead of living in fear of it.
Then your comment made me curious, and I clicked. What the actual fuck.
Marketing and entertainment are supplanting news and knowledge. I hope the people who are pushing back succeed.
So back to waterfall again then. :P
Stationary gasoline engines were already changing the farm and reducing the head of horses necessary to feed a nation. It, too, was a faster horse for them.
Anyways.. it took the Detroit police to eventually deploy the first automatic stoplight. The real innovations seem to be often found downstream of the simple increases in capacity.
That all being said, it seems to me the current crop of LLMs haven't done this, their power and training budgets do not seem to be scaling favorably against adoption rates and profit margins. Absent a significant change in algorithm or computing substrate I don't think this strategy is the leap everyone hopes it will be.
"The doctors were just doing a copy-paste of a copy-paste of a prescription from a few weeks ago, not realizing it was the medication that was killing her. AI helped me ask the right question to save her life."
> AI should learn to say two things: ‘I don’t know’ and ‘you’re wrong.’
My guess is that the next evolutionary step for LLMs will be yet another layer on top of reasoning: some form of self-awareness and theory of mind. The reasoning layer already has glimpses of these things ("The user wants ...") but apparently not enough to suppress generation and say "I don't know".
https://petergpt.github.io/bullshit-benchmark/viewer/index.v...
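For what it's worth, a crude version of that "suppress and say I don't know" layer can be sketched as a confidence gate over the model's own token log-probabilities. Everything below is hypothetical: the function, the threshold, and the idea that mean log-prob is a usable confidence signal are assumptions for illustration, not any vendor's API:

    # Hypothetical confidence gate: refuse when the average token
    # log-probability is low. A sketch of the idea only; in practice
    # calibration is the hard part, not the gating.
    def gated_answer(answer: str, token_logprobs: list[float],
                     threshold: float = -1.5) -> str:
        """Return the answer only if mean log-prob clears the bar."""
        if not token_logprobs:
            return "I don't know."
        mean_lp = sum(token_logprobs) / len(token_logprobs)
        return answer if mean_lp >= threshold else "I don't know."

    # A confident generation passes the gate...
    print(gated_answer("Paris is the capital of France.",
                       [-0.1, -0.2, -0.1, -0.3]))
    # ...an unsure one gets suppressed.
    print(gated_answer("The 1937 champion was almost certainly X.",
                       [-2.9, -3.4, -2.2, -4.1]))

The known failure mode is that models can be fluently, confidently wrong, so raw log-probs alone are a weak proxy; that's presumably why this layer doesn't already exist in a trivial form.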
What consumer benefits is AI driving? At least with industrial automation, consumers benefited from new technologies, cheaper goods, and new job categories.
It would be great if there was some internal “make this benefit Main Street and knowledge workers” department, helping find ways for workers or creators to capture the value of some of the increased productivity.
If they wanted to do this, they could put their models in a public trust for the public's access and benefit in research, education, etc. Then it could be licensed, pay a dividend like a sovereign wealth fund, etc.
Considering that they copy and train on the sum total of all human creativity, a public trust is something that would be in line with both the spirit, and first and fourth considerations, of fair use doctrine:
That way everyone is rewarded with the benefits of running a model that was trained on everyone's creations.
What I need instead is something that takes the burden off my entire society and gives them a breather. Universal health care, to start. They could also use a higher minimum wage and lower housing costs.
That already exists in every other country but the USA. Aim higher.
All Perl programmers should be wishing for ponies; that's definitely less narrow-minded.
That's just the system we have, but slightly better and completely achievable.
You'll either need to freelance, or start a company (or maybe a co-op) to capture the new value created by your ability to leverage AI.
It won't be much different to when a company buys more CNC machines and the employees don't get any more money despite producing way more parts.
This is quite easy. Just optimize the models to do reviews and bug finding. This would make developers (who normally hate reviews) quite happy and let them do more coding, thus delivering more value and possibly earning more...
I have no clue what this would look like other than maybe an investment fund for people creating apps/businesses based on Claude tools.
I can at least “imagine” a model that tries to crack this nut.
Nike’s logo designer was paid $35. One model says she should’ve gotten hundreds of thousands of dollars, because of what her work product went on to become. Another model of the value says it was worth $35 because that’s what she agreed to.
If, as an employee, you think you’re massively undervalued for the impact you generate, go out to the market and either get another job or start your own business making widgets - either you’ll get that pay bump you expect, or you’ll see you actually were relying on a lot of other supporting mechanisms to generate that value.
The intrinsic satisfaction of increasing the wealth of shareholders. We should all be happy to devote ourselves to getting them more, nothing is more important than that.
The point is there is little benefit to these technologies to the consumer, especially in relation to likely harm in other areas (you lost your customer service job, but AI overview will answer your trivia question with slightly less effort). Note: little does not mean none.
So the farce is they benefit by religiously worshiping capitalist shareholders.
> why don't you buy some stock and became the shareholder. Per your own words, you will get more.
LOL. Don't you get it? The kind of smallholdings of shares available to regular people won't provide the kind of returns that mitigate any of these harms. They work as a ploy to trick dumb-ass workers into identifying with capitalist tycoons (e.g. opposing pro-worker policies that would get you a dollar more an hour in wages, in order to get a penny more a quarter in dividends; it works because most don't do the math).
No, I guess I wasn't smart enough to realize there are only two options: the present day status quo or Soviet central planning. Nothing else is possible.
Nothing.
Enjoy your pennies.
My kids like to use AI to discuss things they learned in school in greater depth, and from different angles than they learned in the textbook. They can also ask "What if" and "Why not" questions from this infinitely patient teacher.
AI chat bots will summarize the top N web search results as if they're fact, weaving them into seemingly coherent narratives, all while reassuring the user that their questions are really good and they're learning a lot.
That might not apply to the kinds of parents that hang out here, though.
Basically, consumers don't really pay for software in the first place, and the leverage over labour that companies get through software is already through the roof, even before AI. Will much change for consumers of software?
So... not much benefit either.
But this implies higher productivity, no? This must mean more outputs that should benefit someone, unless the jobs that are being automated had little value to begin with. Seems paradoxical.
If AI leads to better medical diagnoses, great!