The world's on fire.
Part 1: So let's just make AI porn.
I’ve been working on a project to break down the whole AI/LLM mess for a while now. It started as a primer for the people I'm consulting for: a simple guide to showcase the options they could consider and plan for. But the scope snowballed into a wiki-style launchpad laying out all the options, features, benchmarks, tools and strategies anyone could use to integrate AI technology into a small business. There wasn't a utility like that at the time, and it still doesn't exist in a comprehensible package. We just have vague leaderboards and the media screaming “AIAIAIAIAIAIAI” like they have a quota.
I wanted to do this because I believe I have a unique perspective to offer, and I thought to leverage that into future opportunities. My old job involved a run-in with attention networks when we tried to make sense of the mountains of junk created in the BIG DATA and IoT craze. Every consultant on earth would tell executives BIG DATA was going to be the future currency and that they should not delete a byte of it. But they got very fuzzy when operations needed answers regarding how and why. Just trust the unproven process. Sound familiar?
This all happened back in 2018, when one of my roles was to develop integrations between the company's legacy software suite and emerging technologies. I stumbled on Google's deep learning analytics, and it looked like an interesting system for edge analytics and for exploring ideas of procedural analytics. It was honestly shocking how quickly we could stack components in a trench-coat and have it pretend to be an analytics engine (mostly TensorFlow, a NoSQL DB, a Node-RED dashboard and some API calls). Anyone could develop outlier rules and figure out temporal or spatial correlations with neat, relational data, and none of it needed to be production-ready.
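To give a sense of what an “outlier rule” meant in practice, here is a minimal sketch, assuming a simple rolling z-score check over a stream of sensor readings. The window size, threshold and function names are all illustrative assumptions on my part, not the original system:

```python
# Illustrative outlier rule: flag readings that sit more than
# `threshold` standard deviations away from the rolling window mean.
from collections import deque
from statistics import mean, stdev

def make_outlier_rule(window=20, threshold=3.0):
    """Return a checker that flags readings far from recent history."""
    history = deque(maxlen=window)  # rolling window of recent readings

    def check(reading):
        is_outlier = False
        if len(history) >= 2:  # need at least two points for a stdev
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) / sigma > threshold:
                is_outlier = True
        history.append(reading)
        return is_outlier

    return check

# A steady sensor signal with one obvious spike at the end.
rule = make_outlier_rule(window=10, threshold=3.0)
readings = [20.1, 20.3, 19.9, 20.2, 20.0, 20.1, 19.8, 20.2, 95.0]
flags = [rule(r) for r in readings]  # only the 95.0 spike is flagged
```

Trivial to write against clean relational data, which is exactly the point: the hard part was never the rule, it was the data feeding it.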
It fell apart when we introduced the unstructured mess of a data lake and live data streams. Long story short: the results were crap. This technology on its own was not in the right space to consistently provide any new insights. I think the problem with data models has nothing to do with compute power and data quantity. It's actually the type of problem that would likely be made worse by brute-forcing excess data and compute into the process. It's the eternal problem of defining “good” data.
Complete data is not the same as perfect data. All the data in the world does not qualify as perfect data. There is no such thing as perfect data or a perfect model. To have perfect data, you need perfect processes in a perfect operating environment. Anyone who has been in an operations role, from military to medical to merchants, will tell you the same thing: the real world doesn't care about rules and what “should” happen. Google, corporate KPIs and content algorithms are the shining example of this. A programmatic and narrow definition of “quality” only leads to the metrics being abused, or absorbing attention to the detriment of the purpose of the system.
Has “search engine optimization” improved the quality of information on the internet? Has the executive obsession with “shareholder value” improved the quality of products/services and ensured the long-term sustainability of companies/industries? Has the fine-tuning of individualized content improved people's media consumption and enjoyment? To paraphrase an often misattributed and misunderstood quote: “Whatever is measured is managed.” It's a quote that takes on a whole new meaning in context:
“What gets measured gets managed — even when it’s pointless to measure and manage it, and even if it harms the purpose of the organization to do so.” — V. F. Ridgway, 1956
Transformer models and machine learning systems make three dangerous assumptions: that valuable information outweighs bad data; that the user is asking the right questions; and that people will not adapt to maximize their personal gain from the new system.
The only way I believe we really judge these things is by using human judgment.
If anyone had infinite time, money and superhuman intellect, could they have figured it out? Maybe. But I doubt it. And as fun as the experimentation was, at the time my job revolved around billable hours and deliverables.
What changed
I've been out of the tech industry a while and was excited to see what major breakthrough had happened to make this technology the foundation of AI. The whole idea of AI was wild to me because I always considered neural networks and machine learning to be more illustrative than definitional. But ChatGPT seemed to pop out of nowhere boasting INTELLIGENCE. As skeptical as I was, these were the people with infinite resources to throw at the problem. Judging by the names and dollar amounts associated with the project, combined with rave reviews and interesting (concerning) talk from educated professionals, it did feel like there was something real going on.
You can imagine my surprise when I saw the solution was a combination of all the things that got me out of tech. Startup bullshit. Fake it till you IPO. And it didn't help that one of the places I had tried to use data modeling like this was a startup that seemed more interested in the aesthetics of the prototype dashboard than the factuality of what it showed. The genius solution powering AI didn't seem any more advanced than: Ctrl+A on the entire internet, copy, paste. It didn’t take a lot of capability testing with LLMs for me to devolve into a manic conspiracy nut, trying to figure out what the hell was going on with this industry.
Nothing about this felt right. All the grand promises, the irrational ideas of intelligence, the IP theft and the fact that they were charging money for this are distasteful on their own. But the fact that it didn't even work really bugged me. It “could” work... occasionally. Sometimes even impressively. But it failed catastrophically far too often. And they were often sneaky failures too. Failures that could pass muster and often needed a forensic breakdown to identify.
And it was somehow the user's fault when these errors emerged. For not “prompting” correctly. For not burning enough tokens. For not paying for the better model or hardware. For not blindly defending the company and shilling for their software to be adopted and improved. For not understanding the inner workings of a machine without proper enablement, documentation or honest communication.
It's no secret that's the future big tech seems to want. It made sense after the last twelve years of runaway enshittification. Maximum profit, minimum effort, zero accountability. But to see it so plainly paraded as an aspirational future made me want to dig into the specifics and try to communicate them. So my scope for this project once again shifted to demystifying what is happening. That neoliberal fantasy of “just giving people the right information” and trusting they will make the right choices.
The plan was to avoid overly technical details because regular users don’t care. And at no point did I want to mention some abstract future outcome of AGI or doomsday events or a utopia where everything is rosy. That could be next month or in 50 years. People have bills today. I wanted to just present the simplified facts as they have been observed. People, tools, quotes, promises, wins, and failures.
But facts take work. A lot of work. Work to research claims and map events and follow an insane quote to a single line in a 4-hour stream-of-bullshit podcast and track down sources and survey real users' experiences and read papers and just test everything. The best part is that the crap keeps flowing and it's impossible to keep up. In the time it took me to define the AI leaderboards and benchmarks and explain why they hardly matter, there were a dozen new major claims and four new services and a few thousand layoffs, and a company changed their internal LLM use policy twenty times. In the time it took to define a failure of the model (like a Random Word Test or counting to a million), there was a new model and a rollback and multiple updates.
To top it off, the deeper I looked, the worse it got. It fueled a growing dread that there's no point to any of this. We all know sense and reason never win out when people are paid to be unreasonable and peddle nonsense. Or when people are just too desperate and exhausted to put any more effort into rationalizing a bad situation. You see it in the way most skeptics of this industry were treated. And you could argue that this is just the way things are. That my critiques are invalid because this field is just moving too fast and destined for success. But what if it isn't?
It's the type of question people didn't want to entertain. You can dish out every possible fact or informed opinion or financial projection only to have it fall on deaf ears. Spell out the illegality and hypocrisy and it won't matter to the people that need to hear it. You're just bringing down the good vibes. Line go up. Money become more money. Everything is good. Ignore the truck of TNT going towards the stack of red barrels with fire symbols on it.
I don't understand vibes.
So after months of spinning my tires, I needed to admit that it’s not possible to complete this side project with the scattered few hours I can spare a month. That left a choice: I could throw myself into it completely. Let it consume all my free time and become even more of a delight to be around. Or set it aside and unplug from the whole thing. Let the bubble fizzle out or explode or inflate to infinity, and be happy watching other people get paid to record it happening. Suffice it to say, I got to have a few very peaceful weeks. It was better for me and my mental health and my family.
Then I checked back and OpenAI had decided to do porn.
Not mental health guardrails. Not reliable hallucination detection and prevention. Not an SMME (small, medium, micro enterprises) toolkit with industry-specific workflows and official plugins for common software. Not a clear Service Level Agreement for smaller users, setting Quality Assurance standards to work against, or long-term price assurances around which one can make strategic business decisions. Not any real value generator for their business or customers.
Porn.
I don't really care anymore.
Not in the “I don’t care anymore and will spend thousands of words telling you exactly why and how much I don’t care” way. This is so far beyond reason now that I don't care about being reasonable. Or coherent. I'm going to embrace my ultimate fate and become that old man yelling at clouds to get off his lawn. Because I care about the slow death of good technology and tech literacy. And the fact that people are hurting because of this mess. And that it's going to be regular people and our kids who will probably have to deal with the consequences of this screw-up.
Also feels like I need to mention that I don't really care about the adult content industry. No judgment from my side about most of the people that work there. My distaste here is specific to LLM companies making these types of business decisions.
Because I was sure the porn thing should be bigger news. You can imagine my frustration when most people don't have the energy to spend two thoughts on it. And the people who could spare a single thought did so only as a half-hearted obligation. It is admittedly small news next to the inbreeding circle-jerk mess of leveraged financing that OpenAI seems to be cooking up with everyone. And it was announced next to the release of yet another unwieldy LLM-based browser and a week before earnings calls.
But look at it like this: the company that commands a significant share of global wealth allocation (i.e., a considerable percentage of most people's retirement funds) and has access to the most advanced technology they say humans have is resorting to the world's oldest profession. There's so much money and so many people's lives on the line, and there isn't a peep about it in any tech or finance media.
Now I’m hardly the most qualified person to speak on the world's oldest profession. After all, I am a gamer. I just have a funny feeling.
A feeling that OpenAI is not doing this in the name of promoting a healthy, sex-positive culture that’s working towards the production of ethical adult content in which creators are fairly compensated and consumers are encouraged to develop realistic expectations and appropriate sexual habits, where the art can play a part in couples fostering more fulfilling relationships.
It’s just a hunch.
But what if they are in financial trouble and are just doing porn for the money?
Do people in financially stable positions resort to this sort of business?
There are stories of students dropping out of med school, PhD courses and even personnel in lucrative corporate jobs. All opting for this type of content production because they need the money or independence. It's probably a lot more common these days, given the state of everything. Not a lot of people jump to socially distasteful work because they are rolling in cash.
And AI companies are rolling in money, right? It inspires so much confidence when the CEO is asked about revenue and their answer is all about what the company “will do” and not what “is happening”. It's perfectly normal for healthy companies to talk about how they definitely don't want a bailout.
It can't be overstated, so I will say it again. The poster-companies of this industry are not sustainable by selling their products and services, in spite of all the money, tech, access, connections, hype and investor patience at their disposal. Even Nvidia won't be able to maintain chip demand. The useful life of these things has been creeping up due to under-utilization, and I can't imagine that there is a thriving second-hand market. The main focus at the moment is on fluffing up the stock market by blowing life into stagnant old companies. Insane deals that seem to be valued through the complex forecasting equation: 10d20 x 1 billion. They also seem intent on retail advertising, because nothing says innovation like repeating the same bad solution of the last decade. And they love getting in bed with rich or powerful sugar daddies, all while encouraging parasocial fan relationships where irrationally attached people can't help but throw money at them. And now they want to sell sexual fantasies.
A more important question based on that assessment: do tech CEOs just want to be performers in the porn industry? Because it would have saved the world a lot of pain to let the super rich be their true selves and hook them up with an OnlyFans deal. It's a line of work they actually have an aptitude for. Being chronically online, all the crypto scams, selling themselves out to the highest bidder and constantly shilling crap that no one wants. Hell, Elon Musk was faking his PoE2 gameplay to get gamers' attention. It's hard not to see this situation like that.
The life of AI.
(Before they fell off and needed to go into porn to pay the bills)
These companies really want us to anthropomorphize their software, so let's indulge the fantasy. What type of person ends up like this? Because tracing LLM companies, with all their opportunities and outcomes, over a person's life path paints a pretty tropey picture.
I landed on this analogy while trying to unpack what's happening here to my wife. It started to feel less like market analysis and more like we were gossiping about “that” person from high school. We all have “that” person in our life. LLMs are “them”, but supercharged to an insane degree.
That wealthy kid who used their influence to bypass every normal social control and bully their way into situations where they were not welcome or qualified to be. All while being able to wave away any accountability.
School Life
Starting right at the beginning. One of the first impacts of LLMs was their corroding influence on schools. A bad universal-calculator/slot-machine that allowed students to fake assignments with ease. That quickly created a culture of apathy in large segments of the student population. Widespread and effortless short-cutting of the learning process has killed any faith many students and educators had in the value of an education. And the messaging from LLM boosters did not help. Kids are not blind to the realities of the working world and do not have a lot to look forward to. Now you've got CEOs and fanboys making the case that we need to not only ignore the damage caused by their spawn, but embrace it in the education system. Why bother learning when the machine already “knows” the answers? Why bother with anything when everything is about to change? How, when, why? Doesn't matter. Just accept it.
LLMs also went up to higher education and almost destroyed the institution altogether. Not just for students, but for educators and administrators as well. A foundation of higher learning is academic honesty, through the avoidance of plagiarism and proper citation. Tenets that LLMs were allowed to freely flout, and the loss of institutional trust has cascaded through academic culture. Generated assignment briefs and answers. Generated lesson plans. Generated assessments. Generated applications and automated reviews of them. Textbooks. Studies. Quotes. The point of a university degree is under question given how unstable the landscape is. It doesn’t exactly help that so many people still need to pay off a degree while actively looking for a job. Because of LLM-fueled layoffs. I can't imagine starting college in the middle of lockdown and watching all of this happen as you get closer to graduation.
The cherry on top of the crap cake is AI Education: it hasn't been proven to create better learning comprehension in students. Anyone who truly understands education will tell you that delivering answers is a secondary outcome. The value comes from the process of creating the answer. Finding your voice when writing essays. Discovering fields of interest through safe engagement environments. Building the neural frameworks to work through maths, science, geography and every other discipline, then interweaving those processes in individualistic ways to form unique but still factual ideas. And then collaborating with people, building the social skills and structures to work towards a collective good.
I'm not going to pretend that global education frameworks don't fall short by becoming increasingly outcomes- and metrics-based. Some first-world nations are decades behind on tech adoption and are not equipping students for the realities of modern life. But I don't think LLMs are any better. Because the technology is so new in schools and industry, there is no structure or method to the implementation of LLMs, and everyone was left hanging on for dear life. It's too soon to tell what outcome using this tech has on students and their performance. However, in the same short time there have been ample indications it is detrimental to students' ability to meet current standards. Simply because it's plain to see how inaccurate these tools have been and how caustic they are to cognitive ability and critical thinking skills, especially in children.
Working World
In spite of the dumpster fire they made of education, LLMs started to get into the working world through high-visibility integrations. Integrations with nearly every product on earth. We just woke up and it was slapped front and center of every workflow. Software that was not intentionally selected by the user. Software that can't be turned off or configured. Software that sends unknown amounts of data to unknown processes over the internet. Software that generates erroneous data that could disrupt business processes. Is there a name for software like that?
Regardless, everyone was given a set of free, incompetent interns. A massive technological development that was swiftly met with... confusion? Resignation? Apathy? I don't think anyone knows how to feel about it. Because the interns are technically “free”. But they aren’t particularly good at anything. They don’t even learn and improve as advertised. They just need to be babysat constantly. The biggest impact is in informal applications: a web search replacement, a compiler and summarizer of emails. There are people who have found room for them in a workflow as a sounding board or brainstorming tool. Fair enough, but I don't think a lot of people can argue that they are mission-critical, and I'm curious how much people are willing to spend on that.
I wouldn't be surprised if a bulk of LLM usage was in HR and the bullshit hiring merry-go-rounds. LLM job boards and recruiters. LLM written cover letters and resumes. LLMs doing all the screening of those applicants. LLM interviews and assessments. LLM rejection letters. The linchpin of a valueless circular business process. (I love how this industry rhymes with itself.)
Side Quest: Vibe Code
Vibe Coding. Vibe Coding. Vibe. Code. The AI industry's lone use case, shining like a beacon in the maelstrom. What would I be allowed to say about Vibe Code as a washed-out, failed developer who hasn't touched production code in a decade...
Firstly, I love the sentiment of it. That developers have been out there on the frontier for the last 50 years. Diligently toiling away in the code mines to shovel code by the truckload to power the modern world. Now we can rejoice, for our astute overlords hath bestowed upon us humble code monkeys the Infinite Code Machine. An enigmatic device that creates code from nought but a mountain of money and a small city's electricity. And now we can rest easy in the knowledge that society will never want for code again. Now we just need to... do something else to afford this age of abundance.
Honestly, I think vibe code is the result of managing development by all the wrong metrics and is the natural outcome of DevOps and metric led development.
To be fair to all the Vibrators out there reaching heights of programming efficiency I never will: I did not have the patience to “learn to do it properly”. In Python and Java it was competent at the things I'd offload on juniors, but I did not gel with the constant changes in style. Using it to learn Rust, though, did not go well. On top of very bad guidance, it did not help that Rust refuses to mask my mistakes like a normal programming language.
Also, I was never the type to code well in collaborative environments. Most of my programming was either as a solo contractor on bespoke systems or as a consultant for a COTS software suite. So maybe there's some astronomical aptitude ceiling I didn't see. But it's not the top end of the industry that I'm worried about.
The number of “development plans” I've gotten is terrifying. Anyone who was a developer during the startup craze remembers the depths of bad software ideas. I have a Pavlovian response to the sentence “I want to make an app”, because it goes into the mental wood chipper clogged by crypto scams, dangerous miracle software and outright fraud. Now all the bad app ideas can be force-fit into bad plans. Passable plans. Plans accompanied by code that can be run, and that even include steps to maybe push to production. Plans with conflicting architectures, involving million-dollar enterprise tools for a single user. Plans that a fly-by-night operation will be happy to deploy for a modest fee.
My code not improving can be a simple skill issue. Fair enough. My code wasn't remarkable anyway. There are people claiming to be anything from 20% to 2000% more efficient, and I'm really happy that they have such clear-cut metrics to gauge their knowledge work. But these tools are proliferating harmful, even outright dangerous, software.
There's a reason 90% of startups fail, and why all the big-idea people from the last decade are not jumping on the scene with their killer app. There's also a reason why all those idea people have the same handful of “AI powered services”.
Ideas are cheap. Infinitely reproducible, flexible and scalable. Ideas can fix every problem and are not bound by pesky things like limitations, budgets, laws and unforeseen consequences. Ideas can be anything you want them to be, and everyone has them. With this abundance of supply, there are no million-dollar ideas.
There are million-dollar solutions. Brands. Products. To make those, you need to work with what you have, to compromise and compete with reality. No idea survives contact with reality. Especially ones that are individualistic and uncompromising. Even if your idea can push through the budget and legal and technical barriers, no one can escape old man consequences.
I can also say that I might have been too hard on Agile when I called it the worst thing to happen to computers.
Back to work
When LLMs climbed the ladder and made it to corporate consideration, they were fast-tracked right up to the top and... let's just say that the 95% study is fairly well circulated at this point. And I’m talking about the actual study, detailing that companies need to identify specific goals and outcomes that demand this technology, have a clear plan to achieve them, and hold the vendors accountable to their promises and commitments. Not the many “interpretations” that insist the solution is more AI spend, training and adoption to overcome the problems caused by poor AI spending, training and adoption.
“Organizations that successfully cross the GenAI Divide approach AI procurement differently, they act like BPO clients, not SaaS customers. They demand deep customization, drive adoption from the frontlines, and hold vendors accountable to business metrics. The most successful buyers understand that crossing the divide requires partnership, not just purchase.”
The sense I’m getting is that right out of the box, LLMs are not even reliable enough to minute a meeting or compile an email effectively. They can't even generate proper junk data without breaking several of the parameters set.
Would you trust someone like this?
Because we all know a person like this. Or at least have an idea of them. Someone that looks the part and we are assured can do the job by people paid to say so. Hell, they probably think they can do the job by virtue of just having it. They paid for the qualification. They paid for the access. Money talks. Money says a lot.
So the second they are put under pressure, what happens?
They crash out (is that what the kids say?).
And there's nothing wrong with messing up at monumental scales. We all should have been incompetent screw-ups at some point or another. The consequences and shame suck. One of my personal achievements was mixing up the windows between a local VM of a dev environment and the remote Dev connection... good times. The silver lining is that there's a limit to what one person can do, and they either phase out as standards increase or they meet and exceed those standards.
But why, for LLMs, do we constantly ignore and even reward failure?
Every opportunity and easy win just gone up in smoke. Ignore the lawsuits. The artistic endeavors booed by audiences. The high-profile blunders. The rollbacks and broken rules. Just slide it all under the rug and let them fail upwards until the creators are eating dinner with the president and “sponsoring” his inauguration and ballroom for no particular reason.
The best part is that LLMs are not just the embodiment of incompetence. This is Incompetence as a Service, generously provided as freemium bloatware. We don’t just have one incompetent to babysit. You get a headache. And you get a headache. And everyone gets to weather incompetence-induced headaches. Powered by subsidized, hyperscaled data centers and freely (forcefully) piped directly to every web interface on earth.
They didn't mess up one school or university or company or project. They are in a position to screw with all of them. They’re in every browser. And chat applications. And search engines and online stores. Fridges and toasters. Toys and buttons. Microsoft is most of the way to replacing their entire OS with an LLM that feels like it was programmed by an LLM. They can ruin your business and not worry, because they are already turning tricks online before they can be fired. And it seems like every person, organization and government is powerless to prevent any harm.
“You can't stifle innovation.” “We're acting in shareholder interests.” “We need to stop other people from making the bad AI.” “If we stop spending money, then there's no economy because everything is shit right now, but if we don't show profit growth capitalism gets angry and will smite us with a plague of bears.” “Donald Trump will add you to the naughty list and tariff you if you don't play nice with his special friends.”
If the apocalypse happens because of “AI”, I think it will most likely be because of a mistake, not malice. Maybe Microsoft pushes an LLM written patch that screws up a math library. Or the LLM summary of a critical email misunderstands some industry specific jargon.
If any regular human had to follow a similar trajectory through the world as an LLM company, they would have been either institutionalized or executed for being a threat to the species. But these companies have backing. Rules are different. Doesn't matter how many lives are ruined or ended. All that matters is that the money moves and the line goes up.
And it isn't even real money. LLMs can't even do the bare minimum of good enough to generate revenue. If they could, we would be bombarded with headlines about multi-year, minimum-token-spend contracts. There would be dozens of case studies from every possible industry across the globe. Everyone would have multiple LLM-powered side hustles for some passive income, and there would be a flood of new software replacing incumbent solutions.
But all we have to show for it is nonsense assurances for data center contracts and chip sales. Almost like the physical assets are the only part of this industry that is a tangible deliverable.
Research models: coming soon. Lower compute costs... when we hit a large enough scale. Fewer hallucinations... with enough scale. AGI is around the corner... for years now. We need people to pay to figure out the use cases and applications, because the creators sure as hell can't, even with free use of these tools. It's all a tacit acknowledgement that none of these things can reliably perform business functions.
That begs the question: What can they actually do?