The world's on fire. Part 2:

And we should ignore the alarm bells.

These companies want us to anthropomorphize their software, so let's indulge the fantasy. What type of person ends up like this? Tracing LLM companies, with all their opportunities and outcomes, along a person's life path paints a pretty tropey picture.

I came up with this analogy while trying to explain what's going on here to my wife. It started to feel less like market analysis and more like gossiping about “that” person from high school. We all have “that” person in our lives. LLMs are like “them”, but supercharged.

They’re like that wealthy kid who used their influence to bypass every normal social control and bully their way into situations where they were not welcome or qualified to be, and all while being able to wave away any accountability.

School Life

Starting right at the beginning, one of the first impacts of LLMs was their corrupting influence in schools, serving as a faulty universal calculator and a slot machine that let students fake assignments with ease.

This quickly spiraled into a culture of apathy among large segments of the student population. The result? Widespread, effortless short-cutting of learning outcomes has killed whatever faith many students and educators had in the current system. And the messaging from LLM boosters did not help.

Kids are not blind to the realities of the working world, and they do not have a lot to look forward to. Why bother learning when the machine already “knows” every answer?

LLMs also moved up into higher education and nearly destroyed those institutions altogether. And not just for students, but for educators and administrators as well.

A foundation of higher learning is academic honesty: avoiding plagiarism and using proper citation. Tenets that LLM companies were allowed to freely flout, and the loss of that institutional trust has cascaded through society. Generated assignments, briefs and answers. Generated lesson plans. Generated assessments. Generated applications and automated reviews thereof. Textbooks. Studies. Quotes.

The point of a university degree is now being questioned, given how unstable the landscape is. It doesn't exactly help that so many people still paying off their tuition are actively looking for a job.

Because of LLM-fueled layoffs.

I can't imagine starting college in the middle of Lockdown and watching the future you were promised evaporate as you get closer to graduation.

The cherry on top of the crap cake that is AI Education is that it hasn't been shown to improve learning comprehension in students. It hasn't been in use or studied long enough to prove any outcomes.

Anyone who truly understands education will tell you that delivering answers is a secondary outcome.

The value of education stems from the process of creating the answers. For instance, finding your voice in writing essays, correcting weaknesses through assessments, discovering fields of interest through safe engagement environments. It is all about creating those neural frameworks to work through math, science, geography and every other discipline; then interweaving those processes in individualistic ways to form unique but still factual ideas. But the most important part is collaborating with people in building the social skills and structures needed to achieve a collective good. Knowing how and when to push your points, compromise and concede for the greater good.

I'm not going to pretend that global education frameworks don't fall short by becoming increasingly outcomes and metrics based. Some first-world nations are decades behind on tech adoption and are not equipping students for the realities of modern life. But I don't think LLMs are any better.

Because the technology is so new in schools and industry, there is no structure or method to its implementation. Hell, most people don't even know what it's for. I'm writing a bloody novel about this and can't really tell you what it's for.

It's too early to determine the long-term outcomes that using this tech has on students and their performance. However, in this same short time there have been ample indications that it is detrimental to people's abilities to meet current standards.

It's plain to see how inaccurate these tools have been and how caustic they are to cognitive ability and critical thinking skills, especially in children.

Working World

In spite of the dumpster fire they have made of education, LLMs have started to infiltrate the working world.

All of a sudden, AI was dumped front and center into every workflow. Software that was not intentionally selected by the user, that cannot be turned off or configured, that sends unknown amounts of data to unknown processes over the internet. Software that generates erroneous data that could easily disrupt business processes.

Is there a name for software like that? Or is it essentially just a virus?

Regardless, everyone was given a set of free incompetent interns. This massive technological development was swiftly met with... confusion? Resignation? Apathy?

I don't think anyone knows how to feel about it, because the interns are technically “free”.

But they aren’t particularly good at anything. They don’t even learn and improve as advertised. They just need to be babysat constantly.

The biggest impact has been in informal applications, such as replacing web search or compiling and summarizing emails.

There are people who have found a place for language models in a workflow as a sounding board or brainstorming tool. Fair enough, but I don't think many people can argue that they are mission critical, and I'm curious about how much people are willing to spend on that.

I wouldn't be surprised if the bulk of LLM usage was in HR and the bullshit hiring merry-go-rounds. LLM job boards and recruiters, LLM-written cover letters and resumes, LLMs doing all the screening of those applicants, LLM interviews and assessments, LLM rejection letters. The linchpin of a valueless, circular business process. (I love how this industry rhymes with itself.)

And what about Vibe Coding?

Vibe Coding. Vibe Coding. Vibe. Code. The AI industry’s lone use case shining like a beacon in the maelstrom. What would I be allowed to say about Vibe Code as a washed-out, failed developer who hasn't touched production code in a decade...?

That is its own uninformed industry issue.

But I can confidently say that I might have been too hard on Agile being the worst thing to happen to computers.

Back to work

When LLMs climbed the ladder and made it to corporate consideration, they were fast-tracked right up to the top and... let's just say that the 95% failure study is fairly well circulated at this point.

And I'm talking about the actual study, which details that companies need to identify specific goals and outcomes that demand this particular technology, have a clear plan for achieving them, and hold the vendors accountable to their promises and commitments.

Not the many “interpretations” that insist that the solution is more AI spend, training and adoption to overcome the problems caused by poor AI spending, training and adoption.

“Organizations that successfully cross the GenAI Divide approach AI procurement differently, acting more like BPO clients than SaaS customers. They demand deep customization, drive adoption from the frontlines, and hold vendors accountable to business metrics. The most successful buyers understand that crossing the divide requires partnership, not just purchase.”

The sense I’m getting is that corporate AI initiatives are just gambling – as if AI is piloting the last helicopter out of the tech boom, and there are only a few seats left to become the next Facebook or Google.

There's a sense of urgency to put all your money on a one-trick pony. Just like how a “growth-minded” CEO will start with an inflated salary and stock options and a bonus structure and a golden parachute before the business even makes its first profit because it's not like corporates have anything else to spend that money on.

Would you trust people like this?

We all know people like this.

Someone who looks the part and we are assured can do the job by people paid to say so.

People who are given every opportunity and incentive to succeed simply because they had every opportunity and incentive to succeed. The type of person that can confidently absorb all praise and reward from success while effortlessly passing along the cost and consequence of failure. After all, they could pay to be where they are.

Money talks. It speaks volumes. Loud enough to create its own narrative and drown out critics.

But money doesn't actually do anything. It's just a fancy battery. A storage of potential energy.

AI very often just wastes that potential and then demands more.

There's nothing wrong with messing up, even at seemingly monumental scales. That’s why we have checks, structures and standards. Barriers that keep customers and companies safe by filtering the potential for mistakes. Barriers that you overcome with results. Success, even in spite of potential limitations.

But why, for LLMs, do we constantly ignore and even reward failure? Virtue by scope of potential, even in spite of repeat failures.

Every opportunity and easy win just gone up in smoke. Ignore the lawsuits. The artistic endeavors booed by audiences. The high-profile blunders. The rollbacks and broken rules. Just slide it all under the rug and let it fail upwards until the creators are eating dinner with the president and “sponsoring” his inauguration and ballroom for no particular reason.

The best part is that LLMs are not just the embodiment of incompetence. This is Incompetence as a Service. An enforced service, to be more precise.

They didn't mess up just one school or company. No no. They are in a position to screw with all of them. When I said above that they were forced everywhere, it was not just software. Fridges and toasters. Toys and light-switches. This list grows by the day.

If the apocalypse happens because of “AI”, it will most likely be because of a mistake, not malice. Maybe Microsoft pushes an LLM-written patch that screws up a math library. Or the LLM summary of a critical email misunderstands basic facts.

Microsoft is partway to developing an “Agentic Operating System” based on LLMs that feels as if it was programmed by an LLM. People are starting to notice, and their patience is wearing thin. The head of Windows knows this.

The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I've read through the comments and see focus on things like reliability, performance, ease of use and more. But I want to spend a moment just on the point you are making, and I’ll boil it down, we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows. We know words aren’t enough, it’s on us to continue improving and shipping. Would love to connect with you about what the team is doing to address these areas if you are open to it.

It sounds like cutesy corporate talk to do better, but it's not some theoretical PR exercise. We're at the point where users can't trust an operating system, developers can't trust their platform, and power users do not know how to service the product.

That's how you kill a company. But what makes this situation far worse is that we have a running theme of concentrated influence, because that suicidal company has a finger in almost everything and can take it all down with it. The worst part is that it seems like every person, organization, and government is powerless to prevent this insanity from causing more harm.

“You can't stifle innovation.”

“We're acting in shareholder interests.”

“We need to stop other people from making the bad AI.”

“If we stop spending money, then there's no economy because everything is scary right now without any real growth markets; but we need to show profit and growth otherwise capitalism will get angry and will smite us with a plague of bears.”

“I'm going to tell Donald Trump and he'll add you to the naughty list and tariff you if you don't play nice with his special friends.”

So what's the point?

Everyone I've run this project past asks the same things.

Why are you creating this? Who is it for? Do we even need this? Do you have a plan for what this looks like? Why should anyone care, as it's not as if you've got a profile or public reputation for success? Shouldn't you be fixing the prices on the online store? Do you hate the economy?

And they’re all perfectly fair and valid. I've put you through five thousand words and that's a lot of reading in a doom scroll world.

Or perhaps you're just skimming to the end looking for a tl;dr, in which case: Hi. You just missed some long-overdue but measured cathartic decompression. We're going to be adults now.

I'll answer the above with a question that burns in my soul: Why the fuck do we not ask the AI companies these questions too? At worst, I'm just wasting some of your time and you could simply stop whenever you want and be just fine. Hell, I'm the only one out of pocket here.

Sorry, still had some left in the tank.

I wanted to create something by exploiting a gap in the market and offering real value – only to then realize that there is no value to be found.

As for the question of who we're writing for, this project is for the growing number of people who know something is wrong but can't articulate what or why. This part probably deserves its own rant, and it feels very justified to do so right now. I'm writing this revision after NVIDIA's very weird earnings report, where the good(?) numbers did not stem the dip the company has been seeing through November.

My guess is that it may have something to do with “Who is OpenAI’s auditor?” trending on the day and I also saw “Cisco 01” in the linkmap. Why are we being visited by The Ghost of Bubbles Past?

There's also the question of why a company would have so much riding on quarterly reports. It can't be healthy or responsible to require positive outlooks every reporting period.

Regardless, the doubt is now outweighing the hype, but people lack the understanding and language to describe what is going on. I said that modern education has not equipped students for the modern technology landscape. This industry preys on the victims of that failure, and it's just so hard to communicate a precise perspective about this because emotion, deep technical knowledge, personal principles and history make a very unappealing cocktail. It's why I've moved some of the denser stuff into sections of their own.

But at the same time, it appears that companies are all in on AI initiatives. The job cut numbers are no secret. The scope is likely in line with most other technology revolutions, like automation, globalization, digitization and the subsequent global networking of those tools. But unlike other revolutions, which rolled out over years of incremental experimentation, discovery, development and competition, AI is an out-of-the-box leap.

We are starting with the conclusion that there WILL BE efficiency gains. There WILL BE infinite scale. There WILL BE a net good for humanity. We’re working from a speculative outcome. Making a gamble.

I'm personally doing this because I enjoy doing business. I started looking into this new industry because, like a lot of people, I wanted to get in on the ground floor. People skipped out on the internet, mobile, social media and every other big thing because waiting seemed like the smart choice at the time. Now it feels as if we can't afford to be smart and might as well put all our eggs in this blender.

I will be the first to admit that I do not understand every single factor of this technology. I do not know the right way to do this. There's a lot going on, and this little rant was simply a way of saying that people are not crazy for seeing so many problems that have been snowballing for years. All I could do was learn, and I was lucky to have just enough background knowledge and experience to not be completely lost.

Other people can’t say or do the same.

And the foundational knowledge is not even scratching the surface of what is happening.

I didn't even mention the military technology, the psychosis issue, the data centers and the questionable procurement of datasets, or how the tech even works.

Did we ever define AI? Or just the Intelligence part?

We should really start there.