<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Its_still_today</title>
    <link>https://blog.itstoday.site/</link>
    <description></description>
    <pubDate>Wed, 08 Apr 2026 20:06:14 +0000</pubDate>
    <image>
      <url>https://i.snap.as/mu1nKQi1.png</url>
      <title>Its_still_today</title>
      <link>https://blog.itstoday.site/</link>
    </image>
    <item>
      <title>The world&#39;s on fire. Part 2:</title>
      <link>https://blog.itstoday.site/part-2-and-we-should-ignore-the-alarm-bells?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[And we should ignore the alarm bells.&#xA;&#xA;These companies want us to anthropomorphize their software, so let&#39;s indulge in that fantasy. What type of person ends up like this? By tracing LLM companies, with all their opportunities and outcomes, over a person&#39;s life path, it paints a pretty tropey picture.&#xA;&#xA;I came up with this analogy while trying to unpack what’s going on here to my wife. It started to feel less like market analysis and more like gossiping about “that” person from high school. We all have “that” person in our lives. LLMs are like “them”, but supercharged.&#xA;&#xA;They’re like that wealthy kid who used their influence to bypass every normal social control and bully their way into situations where they were not welcome or qualified to be, and all while being able to wave away any accountability.&#xA;&#xA;School Life&#xA;&#xA;Starting right at the beginning, one of the first impacts of LLMs was how they were a corrupting influence in schools, serving as a bad universal-calculator and slot-machine that allowed students to fake assignments with ease.&#xA;&#xA;This quickly spiraled into a culture of apathy for large segments of the student population. The result? Widespread and effortless short-cutting of learning outcomes has killed any faith many students and educators had in the current system. And the messaging by LLM boosters did not help.&#xA;&#xA;Kids are not blind to the realities of the working world and they do not have a lot to look forward to. Why bother learning when the machine already “knows” every answer?&#xA;&#xA;LLMs also went up to higher education and almost destroyed the institutions altogether. And not just for students, but for educators and administrators as well.&#xA;&#xA;A foundation of higher learning is academic honesty through the avoidance of plagiarism and use of proper citation. Tenets 
that LLM companies were allowed to freely flout, and the loss of that institutional trust has cascaded through society. Generated assignments, briefs and answers. Generated lesson plans. Generated assessments. Generated applications and automated reviews thereof. Textbooks. Studies. Quotes.&#xA;&#xA;The point of a university degree is now being questioned, given how unstable the landscape is. It doesn’t exactly help when you have so many people who still need to pay off their tuition fees but can&#39;t find a job.&#xA;&#xA;Because of LLM-fueled layoffs.&#xA;&#xA;I can&#39;t imagine starting college in the middle of Lockdown and watching the future you were promised just evaporate away as you get closer to graduation.&#xA;&#xA;The cherry on top of the crap cake that is AI Education is that it hasn&#39;t been proven to create better learning comprehension in students yet. It hasn’t been in use or studied long enough to prove any outcomes.&#xA;&#xA;Anyone who truly understands education will tell you that delivering answers is a secondary outcome.&#xA;&#xA;The value of education stems from the process of creating the answers. For instance, finding your voice in writing essays, correcting weaknesses through assessments, discovering fields of interest through safe engagement environments. It is all about creating those neural frameworks to work through math, science, geography and every other discipline; then interweaving those processes in individualistic ways to form unique but still factual ideas. But the most important part is collaborating with people in building the social skills and structures needed to achieve a collective good. Knowing how and when to push your points, compromise and concede for the greater good.&#xA;&#xA;I&#39;m not going to pretend that global education frameworks don&#39;t fall short by becoming increasingly outcomes- and metrics-based. 
Some first-world nations are decades behind on tech adoption and are not equipping students for the realities of modern life. But I don&#39;t think LLMs are any better.&#xA;&#xA;Because the technology is so new in schools and industry, there is no structure or method to its implementation. Hell, most people don&#39;t even know what it’s for. I&#39;ve written a bloody novel about this and can’t really tell you what it’s for.&#xA;&#xA;It&#39;s too early to determine the long-term outcomes this tech has on students and their performance. However, in this same short time there have been ample indications that it is detrimental to people’s abilities to meet current standards.&#xA;&#xA;It&#39;s plain to see how inaccurate these tools have been and how caustic they are to cognitive ability and critical thinking skills, especially in children.&#xA;Working World  &#xA;&#xA;In spite of the dumpster fire they have made of education, LLMs have started to infiltrate the working world.&#xA;&#xA;All of a sudden, AI was dumped front and center of every workflow. Software that was not intentionally selected by the user, that cannot be turned off or configured, that sends unknown amounts of data to unknown processes over the internet. Software that generates erroneous data that could easily disrupt business processes.&#xA;&#xA;Is there a name for software like that? Or is it essentially just a virus?&#xA;&#xA;Regardless, everyone was given a set of free incompetent interns. This massive technological development was swiftly met with... confusion? Resignation? Apathy?&#xA;&#xA;I don&#39;t think anyone knows how to feel about it, because the interns are technically “free”.&#xA;&#xA;But they aren’t particularly good at anything. They don’t even learn and improve as advertised. 
They just need to be babysat constantly.&#xA;&#xA;The biggest impact is in informal applications, such as replacing web search and compiling and summarizing emails.&#xA;&#xA;There are people who have found a place for language models in a workflow as a sounding board or brainstorming tool. Fair enough, but I don&#39;t think many people can argue that they are mission critical, and I&#39;m curious about how much people are willing to spend on that.&#xA;&#xA;I wouldn&#39;t be surprised if the bulk of LLM usage was in HR and the bullshit hiring merry-go-rounds. LLM job boards and recruiters, LLM-written cover letters and resumes, LLMs doing all the screening of those applicants, LLM interviews and assessments, LLM rejection letters. The lynchpin of a valueless, circular business process. (I love how this industry rhymes with itself.)&#xA;And what about Vibe Code&#xA; &#xA;Vibe Coding. Vibe Coding. Vibe. Code. The AI industry’s lone use case shining like a beacon in the maelstrom. What would I be allowed to say about Vibe Code as a washed-out, failed developer who hasn&#39;t touched production code in a decade...?&#xA;&#xA;That is its own uninformed industry issue.&#xA;&#xA;But I can confidently say that I might have been too hard on Agile being the worst thing to happen to computers.&#xA;Back to work&#xA; &#xA;When LLMs climbed the ladder and came up for corporate consideration, they were fast-tracked right up to the top and... 
let’s just say that the 95% failure study is fairly well circulated at this point.&#xA;&#xA;And I’m talking about the actual study detailing that companies need to identify specific goals and outcomes that demand use of this specific technology, while also having a clear plan on how to achieve it and hold the vendors accountable to their promises and commitments.&#xA;&#xA;Not the many “interpretations” that insist that the solution is more AI spend, training and adoption to overcome the problems caused by poor AI spending, training and adoption.&#xA;&#xA;  Organizations that successfully cross the GenAI Divide approach AI procurement differently, acting more like BPO clients than SaaS customers. They demand deep customization, drive adoption from the frontlines, and hold vendors accountable to business metrics. The most successful buyers understand that crossing the divide requires partnership, not just purchase.&#xA;&#xA;The sense I’m getting is that corporate AI initiatives are just gambling – as if AI is piloting the last helicopter out of the tech boom, and there are only a few seats left to become the next Facebook or Google.&#xA;&#xA;There&#39;s a sense of urgency to put all your money on a one-trick pony. Just like how a &#34;growth-minded&#34; CEO will start with an inflated salary and stock options and a bonus structure and a golden parachute before the business even makes its first profit because it&#39;s not like corporates have anything else to spend that money on.&#xA;Would you trust people like this?&#xA;&#xA;We all know people like this.&#xA;&#xA;Someone who looks the part and we are assured can do the job by people paid to say so.&#xA;&#xA;People who are given every opportunity and incentive to succeed simply because they had every opportunity and incentive to succeed. The type of person that can confidently absorb all praise and reward from success while effortlessly passing along the cost and consequence of failure. 
After all, they could pay to be where they are.&#xA;&#xA;Money talks. It speaks volumes. Loud enough to create its own narrative and drown out critics.&#xA;&#xA;But money doesn&#39;t actually do anything. It&#39;s just a fancy battery. A store of potential energy.&#xA;&#xA;AI very often just wastes that potential and then demands more.&#xA;&#xA;There&#39;s nothing wrong with messing up, even at seemingly monumental scales. That’s why we have checks, structures and standards. Barriers that keep customers and companies safe by filtering the potential for mistakes. Barriers that you overcome with results. Success, even in spite of potential limitations.&#xA;&#xA;But why, for LLMs, do we constantly ignore and even reward failure? Virtue by scope of potential, even in spite of repeat failures.&#xA;&#xA;Every opportunity and easy win just gone up in smoke. Ignore the lawsuits. The artistic endeavors booed by audiences. The high-profile blunders. The rollbacks and broken rules. Just slide it all under the rug and let it fail upwards until the creators are eating dinner with the president and “sponsoring” his inauguration (https://www.opensecrets.org/trump/2025-inauguration-donors) and ballroom for no particular reason.&#xA;&#xA;The best part is that LLMs are not just the embodiment of incompetence. This is Incompetence as a Service. An enforced service, to be more precise.&#xA;&#xA;They didn&#39;t mess up just one school or company. No no. They are in a position to screw with all of them. When I said above that they were forced everywhere, it was not just software. Fridges and toasters. Toys and light-switches. This list grows by the day.&#xA;&#xA;If the apocalypse happens because of “AI”, it will most likely be because of a mistake, not malice. Maybe Microsoft pushes an LLM-written patch that screws up a math library. 
Or the LLM summary of a critical email misunderstands basic facts.&#xA;&#xA;Microsoft is partway through developing an “Agentic Operating System” based on LLMs that feels as if it was programmed by an LLM. People are starting to notice and their patience is wearing thin. The head of Windows knows this.&#xA;&#xA;  The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I&#39;ve read through the comments and see focus on things like reliability, performance, ease of use and more. But I want to spend a moment just on the point you are making, and I’ll boil it down, we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these paint points and others in detail, because we want developers to choose Windows. We know words aren’t enough, it’s on us to continue improving and shipping. Would love to connect with you about what the team is doing to address these areas if you are open to it.&#xA;&#xA;It sounds like cutesy corporate talk to do better, but it&#39;s not some theoretical PR exercise. We&#39;re at the point where users can&#39;t trust an operating system, developers can&#39;t trust their platform, and power users do not know how to service the product.&#xA;&#xA;That&#39;s how you kill a company. But what makes this situation far worse is that we have a running theme of concentrated influence, because that suicidal company has a finger in almost everything and can take it all down with them. 
The worst part is that it seems like every person, organization, and government is powerless to prevent this insanity from causing more harm.&#xA;&#xA;“You can&#39;t stifle innovation.”&#xA;&#xA;“We&#39;re acting in shareholder interests.”&#xA;&#xA;“We need to stop other people from making the bad AI.”&#xA;&#xA;“If we stop spending money, then there&#39;s no economy because everything is scary right now without any real growth markets; but we need to show profit and growth otherwise capitalism will get angry and will smite us with a plague of bears.”&#xA;&#xA;“I&#39;m going to tell Donald Trump and he&#39;ll add you to the naughty list and tariff you if you don&#39;t play nice with his special friends.”&#xA;So what&#39;s the point?&#xA;&#xA;Everyone I&#39;ve run this piece past asks the same things.&#xA;&#xA;Why are you creating this?&#xA;Who is it for?&#xA;Do we even need this?&#xA;Do you have a plan for what this looks like?&#xA;Why should anyone care, as it&#39;s not as if you&#39;ve got a profile or public reputation for success?&#xA;Shouldn&#39;t you be fixing the prices on the online store?&#xA;Do you hate the economy?&#xA;&#xA;And they’re all perfectly fair and valid. I&#39;ve put you through five thousand words and that&#39;s a lot of reading in a doom scroll world.&#xA;&#xA;Or, perhaps you&#39;re just skimming to the end looking for a tl;dr, in which case:&#xA;TLDR: Hi. You just missed some long overdue but measured cathartic decompression. We&#39;re going to be adults now.&#xA;&#xA;I&#39;ll answer the above with a question that burns in my soul: Why the fuck do we not ask the AI companies these questions too? At worst, I&#39;m just wasting some of your time and you could simply stop whenever you want and be just fine. 
Hell, I&#39;m the only one out of pocket here.&#xA;&#xA;Sorry, still had some left in the tank.&#xA;&#xA;I wanted to create something by exploiting a gap in the market and offering real value – only to then realize that there is no value to be found.&#xA;&#xA;The doubt is now outweighing the hype, but people lack the understanding and language to describe what is going on. I said that modern education has not equipped students for the modern technology landscape. This industry preys on the victims of this failure, and it’s just so hard to communicate a precise perspective about this because emotion, deep technical knowledge, personal principles and history make a very unappealing cocktail. It&#39;s why I’ve moved some of the denser stuff to their own sections.&#xA;&#xA;But at the same time, it appears that companies are all in on AI initiatives. The job cut numbers are no secret. The scope is likely in line with most other technology revolutions like automation, globalization, digitization and the subsequent global networking of those tools. But unlike other revolutions that have rolled out over years of incremental experimentation, discovery, development and competition; AI is an out-the-box leap.&#xA;&#xA;We are starting with the conclusion that there WILL BE efficiency gains. There WILL BE infinite scale. There WILL BE a net good for humanity. We’re working from a speculative outcome. Making a gamble.&#xA;&#xA;I&#39;m personally obsessing about this because I enjoy doing business. I started looking into this new industry because, like a lot of people, I wanted to get in on the ground floor. People skipped out on the internet and mobile and social media and every other big thing, even when getting in early seemed to be the smart choice. Now it feels as if we can&#39;t afford to be smart and might as well put all our eggs in this blender.&#xA;&#xA;I will be the first to admit that I do not understand every single factor of this technology. I do not know the right way to do this. 
There&#39;s a lot going on, and this little rant was simply a way of saying that people are not crazy for seeing so many problems that have been snowballing for years. All I could do was learn and I was lucky to have just enough background knowledge and experience to not be completely lost.&#xA;&#xA;Other people can’t say or do the same.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<h2 id="and-we-should-ignore-the-alarm-bells">And we should ignore the alarm bells.</h2>

<p>These companies want us to anthropomorphize their software, so let&#39;s indulge in that fantasy. What type of person ends up like this? By tracing LLM companies, with all their opportunities and outcomes, over a person&#39;s life path, it paints a pretty tropey picture.</p>

<p>I came up with this analogy while trying to unpack what’s going on here to my wife. It started to feel less like market analysis and more like gossiping about “that” person from high school. We all have “that” person in our lives. LLMs are like “them”, but supercharged.</p>

<p>They’re like that wealthy kid who used their influence to bypass every normal social control and bully their way into situations where they were not welcome or qualified to be, and all while being able to wave away any accountability.</p>



<h3 id="school-life">School Life</h3>

<p>Starting right at the beginning, one of the first impacts of LLMs was how they were a corrupting influence in schools, serving as a bad universal-calculator and slot-machine that allowed students to fake assignments with ease.</p>

<p>This quickly spiraled into a culture of apathy for large segments of the student population. The result? Widespread and effortless short-cutting of learning outcomes has killed any faith many students and educators had in the current system. And the messaging by LLM boosters did not help.</p>

<p>Kids are not blind to the realities of the working world and they do not have a lot to look forward to. Why bother learning when the machine already “knows” every answer?</p>

<p>LLMs also went up to higher education and almost destroyed the institutions altogether. And not just for students, but for educators and administrators as well.</p>

<p>A foundation of higher learning is academic honesty through the avoidance of plagiarism and use of proper citation. <em>Tenets</em> that LLM companies were allowed to freely flout, and the loss of that institutional trust has cascaded through society. Generated assignments, briefs and answers. Generated lesson plans. Generated assessments. Generated applications and automated reviews thereof. Textbooks. Studies. Quotes.</p>

<p>The point of a university degree is now being questioned, given how unstable the landscape is. It doesn’t exactly help when you have so many people who still need to pay off their tuition fees but can&#39;t find a job.</p>

<p>Because of LLM-fueled layoffs.</p>

<p>I can&#39;t imagine starting college in the middle of Lockdown and watching the future you were promised just evaporate away as you get closer to graduation.</p>

<p>The cherry on top of the crap cake that is AI Education is that it hasn&#39;t been proven to create better learning comprehension in students yet. It hasn’t been in use or studied long enough to prove any outcomes.</p>

<p>Anyone who truly understands education will tell you that delivering answers is a secondary outcome.</p>

<p>The value of education stems from the <em>process</em> of creating the answers. For instance, finding your voice in writing essays, correcting weaknesses through assessments, discovering fields of interest through safe engagement environments. It is all about creating those neural frameworks to work through math, science, geography and every other discipline; then interweaving those processes in individualistic ways to form unique but still factual ideas. But the most important part is collaborating with people in building the social skills and structures needed to achieve a collective good. Knowing how and when to push your points, compromise and concede for the greater good.</p>

<p>I&#39;m not going to pretend that global education frameworks don&#39;t fall short by becoming increasingly outcomes- and metrics-based. Some first-world nations are decades behind on tech adoption and are not equipping students for the realities of modern life. But I don&#39;t think LLMs are any better.</p>

<p>Because the technology is so new in schools and industry, there is no structure or method to its implementation. Hell, most people don&#39;t even know what it’s for. I&#39;ve written a bloody novel about this and can’t really tell you what it’s for.</p>

<p>It&#39;s too early to determine the long-term outcomes this tech has on students and their performance. However, in this same short time there have been ample indications that it is detrimental to people’s abilities to meet current standards.</p>

<p>It&#39;s plain to see <a href="https://arxiv.org/pdf/2508.01781">how inaccurate these tools have been</a> and how <a href="https://www.researchgate.net/publication/389130354_LLM_Safety_for_Children">caustic they are to cognitive ability and critical thinking skills, especially in children</a>.</p>

<h3 id="working-world">Working World</h3>

<p>In spite of the dumpster fire they have made of education, LLMs have started to infiltrate the working world.</p>

<p>All of a sudden, AI was dumped front and center of every workflow. Software that was not intentionally selected by the user, that cannot be turned off or configured, that sends unknown amounts of data to unknown processes over the internet. Software that generates erroneous data that could easily disrupt business processes.</p>

<p>Is there a name for software like that? Or is it essentially just a virus?</p>

<p>Regardless, everyone was given a set of free incompetent interns. This massive technological development was swiftly met with... confusion? Resignation? Apathy?</p>

<p>I don&#39;t think anyone knows how to feel about it, because the interns are technically “free”.</p>

<p>But they aren’t particularly good at anything. They don’t even learn and improve as advertised. They just need to be babysat constantly.</p>

<p>The biggest impact is in informal applications, such as replacing web search and compiling and summarizing emails.</p>

<p>There are people who have found a place for language models in a workflow as a sounding board or brainstorming tool. Fair enough, but I don&#39;t think many people can argue that they are mission critical, and I&#39;m curious about how much people are willing to spend on that.</p>

<p>I wouldn&#39;t be surprised if the bulk of LLM usage was in HR and the bullshit hiring merry-go-rounds. LLM job boards and recruiters, LLM-written cover letters and resumes, LLMs doing all the screening of those applicants, LLM interviews and assessments, LLM rejection letters. The lynchpin of a valueless, circular business process. (I love how this industry rhymes with itself.)</p>

<h3 id="and-what-about-vibe-code">And what about Vibe Code</h3>

<p>Vibe Coding. Vibe Coding. Vibe. Code. The AI industry’s lone use case shining like a beacon in the maelstrom. What would I be allowed to say about Vibe Code as a washed-out, failed developer who hasn&#39;t touched production code in a decade...?</p>

<p>That is its own <a href="https://write.as/technical-details/technical-3-vibe-code">uninformed industry issue.</a></p>

<p>But I can confidently say that I might have been too hard on Agile being the worst thing to happen to computers.</p>

<h3 id="back-to-work">Back to work</h3>

<p>When LLMs climbed the ladder and came up for corporate consideration, they were fast-tracked right up to the top and... let’s just say that the <a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf">95% failure study</a> is fairly well circulated at this point.</p>

<p>And I’m talking about the actual study detailing that companies need to identify specific goals and outcomes that demand use of this specific technology, while also having a clear plan on how to achieve it and hold the vendors accountable to their promises and commitments.</p>

<p>Not the many “interpretations” that insist that the solution is more AI spend, training and adoption to overcome the problems caused by poor AI spending, training and adoption.</p>

<blockquote><p>Organizations that successfully cross the GenAI Divide approach AI procurement differently, acting more like BPO clients than SaaS customers. They demand deep customization, drive adoption from the frontlines, and hold vendors accountable to business metrics. The most successful buyers understand that crossing the divide requires partnership, not just purchase.</p></blockquote>

<p>The sense I’m getting is that corporate AI initiatives are just gambling – as if AI is piloting the last helicopter out of the tech boom, and there are only a few seats left to become the next Facebook or Google.</p>

<p>There&#39;s a sense of urgency to put all your money on a one-trick pony. Just like how a “growth-minded” CEO will start with an inflated salary and stock options and a bonus structure and a golden parachute before the business even makes its first profit because it&#39;s not like corporates have anything else to spend that money on.</p>

<h2 id="would-you-trust-people-like-this">Would you trust people like this?</h2>

<p>We all know people like this.</p>

<p>Someone who looks the part and we are assured can do the job by people paid to say so.</p>

<p>People who are given every opportunity and incentive to succeed simply because they had every opportunity and incentive to succeed. The type of person that can confidently absorb all praise and reward from success while effortlessly passing along the cost and consequence of failure. After all, they could pay to be where they are.</p>

<p>Money talks. It speaks volumes. Loud enough to create its own narrative and drown out critics.</p>

<p>But money doesn&#39;t actually do anything. It&#39;s just a fancy battery. A store of potential energy.</p>

<p>AI very often just wastes that potential and then demands more.</p>

<p>There&#39;s nothing wrong with messing up, even at seemingly monumental scales. That’s why we have checks, structures and standards. Barriers that keep customers and companies safe by filtering the potential for mistakes. Barriers that you overcome with results. Success, even in spite of potential limitations.</p>

<p>But why, for LLMs, do we constantly ignore and even reward failure? Virtue by scope of potential, even in spite of repeat failures.</p>

<p>Every opportunity and easy win just gone up in smoke. Ignore the lawsuits. The artistic endeavors booed by audiences. The high-profile blunders. The rollbacks and broken rules. Just slide it all under the rug and let it fail upwards until the creators are <a href="https://www.whitehouse.gov/articles/2025/09/president-trump-tech-leaders-unite-american-ai-dominance/">eating dinner</a> with the president and “sponsoring” his inauguration and <a href="https://time.com/7327752/trump-white-house-ballroom-funding-donors/">ballroom</a> for <a href="https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/">no particular reason</a>.</p>

<p>The best part is that LLMs are not just the embodiment of incompetence. This is Incompetence as a Service. An <em>enforced</em> service, to be more precise.</p>

<p>They didn&#39;t mess up just one school or company. No no. They are in a position to screw with <em>all</em> of them. When I said above that they were forced everywhere, it was not just software. Fridges and toasters. Toys and light-switches. This list grows by the day.</p>

<p>If the apocalypse happens because of “AI”, it will most likely be because of a mistake, not malice. Maybe Microsoft pushes an LLM-written patch that screws up a math library. Or the LLM summary of a critical email misunderstands basic facts.</p>

<p>Microsoft is partway through developing an <a href="https://support.microsoft.com/en-us/windows/experimental-agentic-features-a25ede8a-e4c2-4841-85a8-44839191dfb3">“Agentic Operating System”</a> based on LLMs that feels as if it was programmed by an LLM. People are starting to notice and their patience is wearing thin. The head of Windows <a href="https://x.com/pavandavuluri/status/1989764300488245266">knows this.</a></p>

<blockquote><p>The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I&#39;ve read through the comments and see focus on things like reliability, performance, ease of use and more. But I want to spend a moment just on the point you are making, and I’ll boil it down, we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these paint points and others in detail, because we want developers to choose Windows. We know words aren’t enough, it’s on us to continue improving and shipping. Would love to connect with you about what the team is doing to address these areas if you are open to it.</p></blockquote>

<p>It sounds like cutesy corporate talk to do better, but it&#39;s not some theoretical PR exercise. We&#39;re at the point where users can&#39;t trust an operating system, developers can&#39;t trust their platform, and power users do not know how to service the product.</p>

<p>That&#39;s how you kill a company. But what makes this situation far worse is that we have a running theme of concentrated influence, because that suicidal company has a finger in almost everything and can take it all down with them. The worst part is that it seems like every person, organization, and government is powerless to prevent this insanity from causing more harm.</p>

<p>“You can&#39;t stifle innovation.”</p>

<p>“We&#39;re acting in shareholder interests.”</p>

<p>“We need to stop other people from making the bad AI.”</p>

<p>“If we stop spending money, then there&#39;s no economy because everything is scary right now without any real growth markets; but we need to show profit and growth otherwise capitalism will get angry and will smite us with a plague of bears.”</p>

<p>“I&#39;m going to tell Donald Trump and he&#39;ll add you to the naughty list and tariff you if you don&#39;t play nice with his special friends.”</p>

<h2 id="so-whats-the-point">So what&#39;s the point?</h2>

<p>Everyone I&#39;ve run this piece past asks the same things.</p>

<p>Why are you creating this?<br />
Who is it for?<br />
Do we even need this?<br />
Do you have a plan for what this looks like?<br />
Why should anyone care, as it&#39;s not as if you&#39;ve got a profile or public reputation for success?<br />
Shouldn&#39;t you be fixing the prices on the online store?<br />
Do you hate the economy?</p>

<p>And they’re all perfectly fair and valid. I&#39;ve put you through five thousand words, and that&#39;s a lot of reading in a doom-scroll world.</p>

<p>Or perhaps you&#39;re just skimming to the end looking for a tl;dr, in which case:
TL;DR: Hi. You just missed some long-overdue but measured cathartic decompression. We&#39;re going to be adults now.</p>

<p>I&#39;ll answer the above with a question that burns in my soul: <em>Why the fuck do we not ask the AI companies these questions too?</em> At worst, I&#39;m just wasting some of your time and you could simply stop whenever you want and be just fine. Hell, I&#39;m the only one out of pocket here.</p>

<p>Sorry, still had some left in the tank.</p>

<p>I wanted to create something by exploiting a gap in the market and offering real value – only to then realize that there is no value to be found.</p>

<p>The doubt is now outweighing the hype, but people lack the understanding and language to describe what is going on. I said that modern education has not equipped students for the modern technology landscape. This industry preys on the victims of that failure, and it&#39;s just so hard to communicate a precise perspective about it because emotion, deep technical knowledge, personal principles and history make a very unappealing cocktail. It&#39;s why I’ve moved some of the denser stuff into sections of its own.</p>

<p>But at the same time, it appears that companies are all in on AI initiatives. The job cut numbers are no secret. The scope is likely in line with most other technology revolutions, like automation, globalization, digitization and the subsequent global networking of those tools. But unlike other revolutions, which rolled out over years of incremental experimentation, discovery, development and competition, AI is an out-of-the-box leap.</p>

<p>We are starting with the conclusion that there WILL BE efficiency gains. There WILL BE infinite scale. There WILL BE a net good for humanity. We’re working from a speculative outcome. Making a gamble.</p>

<p>I&#39;m personally obsessing about this because I enjoy doing business. I started looking into this new industry because, like a lot of people, I wanted to get in on the ground floor. People skipped out on the internet and mobile and social media and every other big thing because, at the time, it seemed like the smart choice. Now it feels as if we can&#39;t afford to be smart and might as well put all our eggs in this blender.</p>

<p>I will be the first to admit that I do not understand every single factor of this technology. I do not know the right way to do this. There&#39;s a lot going on, and this little rant was simply a way of saying that people are not crazy for seeing so many problems that have been snowballing for years. All I could do was learn, and I was lucky to have just enough background knowledge and experience to not be completely lost.</p>

<p>Other people can’t say or do the same.</p>
]]></content:encoded>
      <guid>https://blog.itstoday.site/part-2-and-we-should-ignore-the-alarm-bells</guid>
      <pubDate>Mon, 08 Dec 2025 05:21:04 +0000</pubDate>
    </item>
    <item>
      <title>The world’s on fire. Part 1:</title>
      <link>https://blog.itstoday.site/the-worlds-on-fire?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So let’s just make AI porn.&#xA;&#xA;I’ve been working on a project to simplify the whole AI/LLM mess for a while now. It started as a primer to the people I consult for, a simple guide to the available options they can consider and they could plan accordingly.&#xA;&#xA;The scope snowballed into a wiki-style launchpad that lays out all possible options, features, benchmarks, tools and strategies anyone could use to integrate AI technology into a small business.&#xA;&#xA;There still doesn&#39;t seem to be any comprehensible service that does this. Currently, we only have vague leaderboards and the media screaming “AI! AI! AI! AI! AI! AI! AI!” as if they have a quota to meet.&#xA;&#xA;I decided to pursue this project because I believe I have a unique perspective to offer and saw an opportunity to leverage it towards future opportunities. In my previous work, I encountered Attention Networks while trying to make sense of the mountains of junk data generated during the BIG DATA and Internet of Things craze.&#xA;&#xA;Every consultant on earth would tell executives that BIG DATA was going to be the future currency and that they should not delete a byte of it.&#xA;But those consultants were silent when operations needed answers on exactly how and why.&#xA;“Just trust the unproven process.”&#xA;&#xA;Sound familiar?&#xA;&#xA;!--more--&#xA;&#xA;Long story short, the initial results from our work with TensorFlow were promising, but it was the type of obvious links between related data that you could achieve with any existing tools and rules. However, it all quickly fell apart when we attempted to uncover &#34;deep insights&#34; – links that are not apparent to people or regular tools.&#xA;&#xA;This technology, on its own, was not in the right space at the time to consistently provide value.&#xA;I believe the problem with data models has nothing to do with compute power and data quantity. 
In fact, it&#39;s actually the type of problem that would likely be made worse by brute forcing excess data and compute into the process.&#xA;&#xA;It&#39;s the eternal problem of defining “good” data.&#xA;&#xA;Complete data is not necessarily perfect data. Having all the data in the world does not guarantee perfect data. There is no such thing as perfect data or a perfect model.&#xA;&#xA;In order to have perfect data, you need perfect processes in a perfect operating environment. Anyone who has been in an operations role, whether in the military, medical field, or as merchants, will all agree on one thing: the real world doesn&#39;t care about rules and expectations. All the real world does is change and ruin your rules and expectations.&#xA;&#xA;Unlike the unpredictable nature of the real world, transformer models can’t change.&#xA;&#xA;They are built on static foundations and rely on three risky assumptions:&#xA;&#xA;The assumption that useful information in the system significantly outweighs misinformation. That is very difficult to judge as an uninformed user.&#xA;&#xA;The assumption that you are asking correct questions, bearing in mind that bad information leads to bad questions – and that results in bad information which ultimately leads to poor decisions.&#xA;&#xA;The assumption that changing metrics and decision-making does not inspire a change in behavior. This is especially pertinent given the fact that these systems allow decision-making without accountability.&#xA;&#xA;What changed?&#xA;&#xA;I&#39;ve been out of the tech industry for a while, and I was excited to see what major breakthrough had happened to make language-based transformer models the foundation of AI.&#xA;&#xA;The whole idea of AI looking like this seemed... counter-intuitive. 
Because I always considered neural networks and machine learning to be more illustrative and less a definition, but chatGPT seemed to pop out of nowhere.&#xA;&#xA;Boasting actual INTELLIGENCE.&#xA;&#xA;As skeptical as I was, these were the people who had the infinite resources and geniuses to throw at the problem. Judging by the names and dollar amounts associated with the project, the teams’ prior work and rave reviews from educated professionals, it did feel as if there was something real going on.&#xA;&#xA;You can imagine my surprise when I saw that the solution was a combination of everything that got me out of the tech industry.&#xA;Startup bullshit.&#xA;“Fake it till you IPO.”&#xA;It didn&#39;t help that one of the places where I attempted to use data modeling in this way was with a startup that appeared to be more interested in the aesthetics of the prototype dashboard than the accuracy of the information it presented.&#xA;&#xA;The massive breakthrough solution to overcome the AI barrier didn&#39;t appear to be any more advanced than: Ctrl A on the entire internet, copy, paste and make a giant model. (This is obviously a facetious statement. The rational and overly technical thoughts can be found here.)&#xA;&#xA;It didn’t take much capability testing of LLMs for me to spiral into a state of manic conspiracy nut, questioning everything in this industry.&#xA;&#xA;Nothing about it felt right. All the grand promises, the irrational ideas of intelligence and claims of IP theft, along with the fact that they were charging money for this. It’s all distasteful on their own and worthy of scrutiny. However, one of the sticking points for me was the fact that it didn&#39;t really work.&#xA;&#xA;It “could” work... occasionally. Sometimes even impressively. But it did fail catastrophically far too often. 
And it was often sneaky failures too – able to pass muster and often requiring a forensic breakdown to identify.&#xA;&#xA;It was somehow also the user’s fault for these errors emerging. For not “prompting” correctly. For not burning enough tokens. For not paying for the better model or hardware. For not blindly defending the company and shilling for their software to be adopted and improved. For not understanding the inner workings of a machine without proper enablement, documentation or honest communication.&#xA;&#xA;This is the future big tech seems to want. It made sense after the last twelve years of runaway enshitiflation: Maximum profit, minimum effort, zero accountability. To see it so plainly paraded as an aspirational future made me want to dig into the specifics and to try to communicate them. My scope for this project once again shifted to demystifying what is happening. That neo-liberal fantasy of “just giving people the right information” and trusting that society will magically make the right choices. Trust that only the most rational ideas will emerge in the free market of ideas.&#xA;&#xA;Ideas like the Metaverse. The Blockchain. Windows Vista. Google Barge.&#xA;&#xA;My plan was to avoid overly technical details, because regular people simply don’t care. At no point did I want to mention some abstract future outcome of Artificial Super-Intelligence or doomsday events or a utopia where everything is rosy.&#xA;That could happen next month or in 50 years – but people have bills to pay today. I just wanted to present the simplified facts as they have been observed:&#xA;people, tools, quotes, promises, wins, and failures.&#xA;&#xA;There’s a massive issue that comes from dealing in facts.&#xA;Facts require effort. A lot of effort. 
Effort to investigate claims and map events and follow an insane quote to a single line in a four-hour stream-of-bullshit podcast and track down sources and survey real user experiences and read papers and just test everything.&#xA;&#xA;The best part is that the crap keeps flowing and it&#39;s impossible to keep up. During the time that it took me to define the AI leaderboards and benchmarks and explain why they hardly matter, a dozen new major claims and four new services emerged. There were also a few thousand layoffs, and a company changed their internal LLM use policy no less than 20 times.&#xA;&#xA;In the time that it took to define a failure of the model (such as a random word test or count to a million), a new model was introduced, followed by a roll-back, multiple updates and apologies.&#xA;&#xA;To top it off, the deeper you looked, the worse it got. It fueled a growing dread that there&#39;s no point to any of this. We all know that logic and reason never win out when people are paid to be unreasonable and peddle nonsense. Or when people are just too desperate and exhausted to put any more effort into rationalizing a bad situation.&#xA;&#xA;You can see it in the way most skeptics of this industry are treated, and you could argue that this is just the way things are. That my critiques may be deemed invalid, because this field is just moving too fast and destined for success.&#xA;&#xA;But what if it isn&#39;t? What if the cost of failure is far worse than the cost of not participating?&#xA;&#xA;It&#39;s the type of questions people didn’t seem willing to entertain. You can dish out every possible fact or informed opinion or financial projection, only to have it fall on deaf ears.&#xA;You can spell out the illegality and hypocrisy, but it won&#39;t matter to the people who need to hear it. You&#39;re just bringing down the “good vibes”.&#xA;&#xA;The line is going up. Money becomes more money. Everything&#39;s good. 
We’ve just created 20 billion dollars in the last two sentences.&#xA;&#xA;Ignore the uncontrolled truck of TNT flying towards the stack of red barrels with fire symbols on them.&#xA;&#xA;I don&#39;t understand vibes.&#xA;&#xA;After months of spinning tires, I needed to admit that it isn’t possible to complete this side project with the few scattered hours I can spare each month.&#xA;That left me with a choice: I could throw myself into it completely, let it consume all my free time and become even more of a delight to be around, or I could set it aside and unplug from the whole thing. Let the bubble fizzle out, explode, or inflate to infinity, and be happy watching other people get paid to make it happen.&#xA;&#xA;Suffice it to say, I got to have a few very peaceful weeks. It was better for me, my mental health and my family.&#xA;&#xA;Then I checked again and discovered that openAI had decided to do porn.&#xA;&#xA;Not mental health guardrails. Not reliable methods for detecting and preventing hallucinations. Not an SMME (small, medium, micro enterprises) toolkit with industry-specific workflows and official plugins for common software. Not a clear service-level agreement for smaller users, setting quality assurance standards to work against, or long-term price assurances around which one can make strategic business decisions. Not any real value generator for their business or customers.&#xA;&#xA;Porn.&#xA;&#xA;I don&#39;t really care anymore.&#xA;&#xA;Not in the “I don’t care anymore and will expend thousands of words telling you exactly why and how much I don’t care” way. This is so far beyond reason now that I don&#39;t care about being reasonable or coherent. 
I&#39;m going to embrace my ultimate fate and become that old man yelling at clouds to get off his lawn because I care.&#xA;What I do care about is the slow death of good technology and tech literacy, the fact that people are hurting because of this mess, and that it’s going to be regular people like you and me and our children who are almost certainly going to have to deal with the consequences.&#xA;&#xA;I need to make it clear that I don&#39;t really care about the adult content industry. I don’t judge most of the people who work in that field. My distaste here is specific to LLM companies making these types of business decisions.&#xA;&#xA;At the time, I was sure the porn thing should be bigger news. You can imagine my frustration when most people didn&#39;t even have the energy to give it a second thought, and those who could spare a single thought only did so as a halfhearted obligation.&#xA;Admittedly, it was small news compared to the inbreeding circle-jerk mess of leveraged financing that openAI seems to be cooking up with everyone. It was also announced alongside the release of yet another unwieldy LLM-based browser and a week before earnings calls.&#xA;&#xA;Look at it this way: the company that commands a significant share of the global wealth allocation (i.e., a considerable percentage of most people’s retirement funds) and has access to the most advanced technology they say is known to humans, is now turning to the world’s oldest profession. With so much money and so many people’s lives at stake, it blows my mind that there isn&#39;t a peep about it in any tech or finance media.&#xA;&#xA;Now, I’m hardly the most qualified person to speak on the world’s oldest profession. After all, I am a gamer. 
I just have a funny feeling…&#xA;&#xA;A feeling that openAI is not doing this in the name of promoting a healthy sex-positive culture that aims to produce ethical adult content, where creators are fairly compensated and consumers are encouraged to develop realistic expectations and appropriate sexual habits, where the art could play a part in couples fostering more fulfilling relationships.&#xA;&#xA;It’s just a hunch.&#xA;&#xA;But what if they are in financial trouble and are just doing porn for the money?&#xA;&#xA;Do people in financially stable positions resort to this sort of business?&#xA;&#xA;People drop out of med school, PhD courses, and even lucrative corporate jobs to pursue a career like this, either because they need the money, or because they believe that this is the only available path to independence.&#xA;It&#39;s likely that a lot more people are making this decision nowadays, given the current state of everything. Very few people are jumping into socially distasteful work if they happen to be rolling in cash. I am willing to bet on it.&#xA;&#xA;And AI companies are rolling in money right?&#xA;It inspires so much confidence when the CEO is asked about revenue, and their answer is all about what the company “will do” and not what “is happening”. It&#39;s perfectly normal for healthy companies to talk about how they definitely don&#39;t want a bailout.&#xA;&#xA;It can&#39;t be overstated, so I will say it again. The leading companies of this new industry are not sustainable by selling their products and services in spite of all the money, tech, access, connections, hype and investor patience at their disposal.&#xA;&#xA;Even NVIDIA won’t be able to sustain chip demand. 
The useful life of these things has been creeping up due to underutilization, and I can&#39;t imagine that a thriving second-hand market exists.&#xA;&#xA;The main focus at the moment is on fluffing up the stock market by blowing life into stagnant old companies with insane deals that seem to be valuated through the complex forecasting equation: rolling a handful of dice and multiply by a billion.&#xA;&#xA;They also seem intent on retail advertising, because nothing says innovation more than providing the same bad solution of the last decade. They also love getting in bed with rich or powerful sugar daddies, all while encouraging parasocial fan relationships where irrationally attached people can&#39;t help but throw money at them.&#xA;&#xA;Now they want to sell sexual fantasies.&#xA;&#xA;A more important question stemming from the above assessment is this: Do tech CEOs just want to be performers in the porn industry? If so, it would have saved the world a lot of pain to instead allow the super-rich to be their true selves and hook them up with an OnlyFans deal.&#xA;&#xA;It&#39;s a line of work they actually have an aptitude for: Being chronically online, all the crypto scams, selling themselves to the highest bidder, and constantly shilling crap that no one wants. Hell, Elon Musk was faking his Path of Exile 2 game-play to act like &#34;one of the guys&#34;.&#xA;&#xA;It&#39;s hard not to see this situation as if they are a club of pick-me camgirls (I am not going to define that for everyone&#39;s sake).&#xA;&#xA;And this sort of attitude from the industry leaders is reflected in the sorry state of their products.&#xA;&#xA;Chapter bridge:&#xA;&#xA;This is already getting pretty long winded.&#xA;Will keep this crazy train going in a few days with:&#xA;&#xA;The life of AI (Before they fell off and needed to go into porn to pay the bills)]]&gt;</description>
<content:encoded><![CDATA[<h2 id="so-let-s-just-make-ai-porn">So let’s just make AI porn.</h2>

<p>I’ve been working on a project to simplify the whole AI/LLM mess for a while now. It started as a primer for the people I consult for: a simple guide to the available options, so they could plan accordingly.</p>

<p>The scope snowballed into a wiki-style launchpad that lays out all possible options, features, benchmarks, tools and strategies anyone could use to integrate AI technology into a small business.</p>

<p>There still doesn&#39;t seem to be any comprehensive service that does this. Currently, we only have vague leaderboards and the media screaming “AI! AI! AI! AI! AI! AI! AI!” as if they have a quota to meet.</p>

<p>I decided to pursue this project because I believe I have a unique perspective to offer, and I saw an opportunity to leverage it. In my previous work, I encountered Attention Networks while trying to make sense of the mountains of junk data generated during the BIG DATA and Internet of Things craze.</p>

<p>Every consultant on earth would tell executives that BIG DATA was going to be the future currency and that they should not delete a byte of it.
But those consultants were silent when operations needed answers on exactly how and why.
“Just trust the unproven process.”</p>

<p>Sound familiar?</p>



<p>Long story short, the initial results from our work with TensorFlow were promising, but they were the kind of obvious links between related data that you could surface with any existing tools and rules. However, it all quickly fell apart when we attempted to uncover “deep insights” – links that are not apparent to people or regular tools.</p>

<p>This technology, on its own, was not in the right space at the time to consistently provide value.
I believe the problem with data models has nothing to do with compute power and data quantity. In fact, it&#39;s actually the type of problem that would likely be made worse by brute forcing excess data and compute into the process.</p>

<p>It&#39;s the eternal problem of defining “good” data.</p>

<p>Complete data is not necessarily perfect data. Having all the data in the world does not guarantee perfect data. There is no such thing as perfect data or a perfect model.</p>

<p>In order to have perfect data, you need perfect processes in a perfect operating environment. Anyone who has been in an operations role, whether in the military, in medicine, or in commerce, will agree on one thing: the real world doesn&#39;t care about rules and expectations. All the real world does is change and ruin your rules and expectations.</p>

<p>Unlike the unpredictable nature of the real world, transformer models can’t change.</p>

<p>They are built on static foundations and rely on three risky assumptions:</p>

<p>The assumption that useful information in the system significantly outweighs misinformation. That is very difficult to judge as an uninformed user.</p>

<p>The assumption that you are asking correct questions, bearing in mind that bad information leads to bad questions – and that results in bad information which ultimately leads to poor decisions.</p>

<p>The assumption that changing metrics and decision-making does not inspire a change in behavior. This is especially pertinent given the fact that these systems allow decision-making without accountability.</p>

<h3 id="what-changed">What changed?</h3>

<p>I&#39;ve been out of the tech industry for a while, and I was excited to see what major breakthrough had happened to make language-based transformer models the foundation of AI.</p>

<p>The whole idea of AI looking like this seemed... counter-intuitive. I had always considered “neural networks” and “machine learning” to be more illustrative terms than literal definitions, and then chatGPT seemed to pop out of nowhere.</p>

<p>Boasting actual INTELLIGENCE.</p>

<p>As skeptical as I was, these were the people who had the infinite resources and geniuses to throw at the problem. Judging by the names and dollar amounts associated with the project, the teams’ prior work and rave reviews from educated professionals, it did feel as if there was something real going on.</p>

<p>You can imagine my surprise when I saw that the solution was a combination of everything that got me out of the tech industry.
Startup bullshit.
“Fake it till you IPO.”
It didn&#39;t help that one of the places where I attempted to use data modeling in this way was with a startup that appeared to be more interested in the aesthetics of the prototype dashboard than the accuracy of the information it presented.</p>

<p>The massive breakthrough solution to overcome the AI barrier didn&#39;t appear to be any more advanced than: Ctrl A on the entire internet, copy, paste and make a giant model. (This is obviously a facetious statement. The rational and overly technical thoughts can be found <a href="https://write.as/technical-details/technical-1-i-only-said-it-seems-like-ctrl-a-on-the-entire-internet" title="here">here</a>.)</p>

<p>It didn’t take much capability testing of LLMs for me to spiral into the state of a manic conspiracy nut, questioning everything in this industry.</p>

<p>Nothing about it felt right. All the grand promises, the irrational ideas of intelligence and the claims of IP theft, along with the fact that they were charging money for this. Each is distasteful on its own and worthy of scrutiny. However, one of the sticking points for me was the fact that it didn&#39;t really work.</p>

<p>It “could” work... occasionally. Sometimes even impressively. But it failed catastrophically far too often. And the failures were often sneaky ones – able to pass muster and requiring a forensic breakdown to identify.</p>

<p>Somehow, these errors were also the user’s fault. For not “prompting” correctly. For not burning enough tokens. For not paying for the better model or hardware. For not blindly defending the company and shilling for their software to be adopted and improved. For not understanding the inner workings of a machine without proper enablement, documentation or honest communication.</p>

<p>This is the future big tech seems to want. It made sense after the last twelve years of runaway enshitiflation: Maximum profit, minimum effort, zero accountability. To see it so plainly paraded as an aspirational future made me want to dig into the specifics and to try to communicate them. My scope for this project once again shifted to demystifying what is happening. That neo-liberal fantasy of “just giving people the right information” and trusting that society will magically make the right choices. Trust that only the most rational ideas will emerge in the free market of ideas.</p>

<p>Ideas like the Metaverse. The Blockchain. Windows Vista. <a href="https://en.wikipedia.org/wiki/Google_barges">Google Barge</a>.</p>

<p>My plan was to avoid overly technical details, because regular people simply don’t care. At no point did I want to mention some abstract future outcome of Artificial Super-Intelligence or doomsday events or a utopia where everything is rosy.
That could happen next month or in 50 years – but people have bills to pay today. I just wanted to present the simplified facts as they have been observed:
people, tools, quotes, promises, wins, and failures.</p>

<p>There’s a massive issue that comes from dealing in facts.
Facts require effort. A lot of effort. Effort to investigate claims and map events and follow an insane quote to a single line in a four-hour stream-of-bullshit podcast and track down sources and survey real user experiences and read papers and just test everything.</p>

<p>The best part is that the crap keeps flowing and it&#39;s impossible to keep up. During the time that it took me to define the AI leaderboards and benchmarks and explain why they hardly matter, a dozen new major claims and four new services emerged. There were also a few thousand layoffs, and a company changed their internal LLM use policy no less than 20 times.</p>

<p>In the time that it took to define a failure of the model (such as a random word test or count to a million), a new model was introduced, followed by a roll-back, multiple updates and apologies.</p>

<p>To top it off, the deeper you looked, the worse it got. It fueled a growing dread that there&#39;s no point to any of this. We all know that logic and reason never win out when people are paid to be unreasonable and peddle nonsense. Or when people are just too desperate and exhausted to put any more effort into rationalizing a bad situation.</p>

<p>You can see it in the way most skeptics of this industry are treated, and you could argue that this is just the way things are. That my critiques may be deemed invalid, because this field is just moving too fast and destined for success.</p>

<p>But what if it isn&#39;t? What if the cost of failure is far worse than the cost of not participating?</p>

<p>It&#39;s the type of questions people didn’t seem willing to entertain. You can dish out every possible fact or informed opinion or financial projection, only to have it fall on deaf ears.
You can spell out the illegality and hypocrisy, but it won&#39;t matter to the people who need to hear it. You&#39;re just bringing down the “good vibes”.</p>

<p>The line is going up. Money becomes more money. Everything&#39;s good. We’ve just created 20 billion dollars in the last two sentences.</p>

<p>Ignore the uncontrolled truck of TNT flying towards the stack of red barrels with fire symbols on them.</p>

<p>I don&#39;t understand vibes.</p>

<p>After months of spinning my wheels, I needed to admit that it isn’t possible to complete this side project with the few scattered hours I can spare each month.
That left me with a choice: I could throw myself into it completely, let it consume all my free time and become even more of a delight to be around, or I could set it aside and unplug from the whole thing. Let the bubble fizzle out, explode, or inflate to infinity, and be happy watching other people get paid to make it happen.</p>

<p>Suffice it to say, I got to have a few very peaceful weeks. It was better for me, my mental health and my family.</p>

<p>Then I checked again and discovered that <a href="https://arstechnica.com/ai/2025/02/chatgpt-can-now-write-erotica-as-openai-eases-up-on-ai-paternalism/">openAI had decided to do porn</a>.</p>

<p>Not mental health guardrails. Not reliable methods for detecting and preventing hallucinations. Not an SMME (small, medium, micro enterprises) toolkit with industry-specific workflows and official plugins for common software. Not a clear service-level agreement for smaller users, setting quality assurance standards to work against, or long-term price assurances around which one can make strategic business decisions. Not any real value generator for their business or customers.</p>

<p><a href="https://write.as/technical-details/technical-2-congratulations-on-your-risky-click-of-the-day">Porn</a>.</p>

<h3 id="i-don-t-really-care-anymore">I don&#39;t really care anymore.</h3>

<p>Not in the “I don’t care anymore and will expend thousands of words telling you exactly why and how much I don’t care” way. This is so far beyond reason now that I don&#39;t care about being reasonable or coherent. I&#39;m going to embrace my ultimate fate and become that old man yelling at clouds to get off his lawn because I care.
What I do care about is the slow death of good technology and tech literacy, the fact that people are hurting because of this mess, and that it’s going to be regular people like you and me and our children who are almost certainly going to have to deal with the consequences.</p>

<p>I need to make it clear that I don&#39;t really care about the adult content industry. I don’t judge most of the people who work in that field. My distaste here is specific to LLM companies making these types of business decisions.</p>

<p>At the time, I was sure the porn thing should be bigger news. You can imagine my frustration when most people didn&#39;t even have the energy to give it a second thought, and those who could spare a single thought only did so as a halfhearted obligation.
Admittedly, it was small news compared to the inbreeding circle-jerk mess of leveraged financing that openAI seems to be cooking up with everyone. It was also announced alongside the release of yet another unwieldy LLM-based browser and a week before earnings calls.</p>

<p>Look at it this way: the company that commands a significant share of global wealth allocation (i.e., a considerable percentage of most people’s retirement funds), and that has access to what it claims is the most advanced technology known to humanity, is now turning to the world’s oldest profession. With so much money and so many people’s lives at stake, it blows my mind that there isn&#39;t a peep about it in any tech or finance media.</p>

<p>Now, I’m hardly the most qualified person to speak on the world’s oldest profession. After all, I am a gamer. I just have a funny feeling…</p>

<p>A feeling that openAI is not doing this in the name of promoting a healthy sex-positive culture that aims to produce ethical adult content, where creators are fairly compensated and consumers are encouraged to develop realistic expectations and appropriate sexual habits, where the art could play a part in couples fostering more fulfilling relationships.</p>

<p>It’s just a hunch.</p>

<p>But what if they are in financial trouble and are just doing porn for the money?</p>

<p>Do people in financially stable positions resort to this sort of business?</p>

<p>People drop out of med school, PhD programs, and even lucrative corporate jobs to pursue a career like this, either because they need the money or because they believe it is the only available path to independence.
It&#39;s likely that a lot more people are making this decision nowadays, given the current state of everything. Very few people jump into socially distasteful work if they happen to be rolling in cash. I am willing to bet on it.</p>

<p>And AI companies are rolling in money <a href="https://youtube.com/shorts/Kxe4ZjD_wBI">right?</a>
It inspires so much confidence when the CEO is asked about revenue, and their answer is all about what the company “will do” and not what “is happening”. It&#39;s perfectly normal for healthy companies to talk about <a href="https://www.msn.com/en-us/technology/artificial-intelligence/sam-altman-says-no-government-bailout-if-openai-fails/ar-AA1Qa0Wj">how they definitely don&#39;t want a bailout</a>.</p>

<p>It can&#39;t be overstated, so I will say it again: the leading companies of this new industry cannot sustain themselves by selling their products and services, in spite of all the money, tech, access, connections, hype, and investor patience at their disposal.</p>

<p>Even NVIDIA won’t be able to sustain chip demand. The useful life of these things has been creeping up due to underutilization, and I can&#39;t imagine that a thriving second-hand market exists.</p>

<p>The main focus at the moment is on fluffing up the stock market by blowing life into stagnant old companies with insane deals that seem to be valued through a complex forecasting equation: roll a handful of dice and multiply by a billion.</p>

<p>They also seem intent on retail advertising, because nothing says innovation like recycling the same bad solution of the last decade. And they love getting into bed with rich or powerful sugar daddies, all while encouraging parasocial fan relationships where irrationally attached people can&#39;t help but throw money at them.</p>

<p>Now they want to sell sexual fantasies.</p>

<p>A more important question stemming from the above assessment is this: do tech CEOs just want to be performers in the porn industry? If so, it would have saved the world a lot of pain to let the super-rich be their true selves and hook them up with an OnlyFans deal.</p>

<p>It&#39;s a line of work they actually have an aptitude for: being chronically online, running crypto scams, selling themselves to the highest bidder, and constantly shilling crap that no one wants. Hell, Elon Musk was <a href="https://www.youtube.com/watch?v=qpXu9ft9h4M&amp;pp=ygUOZWxvbiBmYWtlIHBvZTLSBwkJCwoBhyohjO8%3D">faking his Path of Exile 2</a> gameplay to act like “one of the guys”.</p>

<p>It&#39;s hard not to see this situation as if they are a club of pick-me camgirls (I am not going to define that for everyone&#39;s sake).</p>

<p>And this sort of attitude from the industry leaders is reflected in the sorry state of their products.</p>

<p>Chapter bridge:</p>

<p>This is already getting pretty long-winded.
I&#39;ll keep this crazy train going in a few days with:</p>

<p><em>The life of AI (Before they fell off and needed to go into porn to pay the bills)</em></p>
]]></content:encoded>
      <guid>https://blog.itstoday.site/the-worlds-on-fire</guid>
      <pubDate>Thu, 27 Nov 2025 15:32:00 +0000</pubDate>
    </item>
  </channel>
</rss>