ChatGPT's Thinking Images, Meta's Mouse Tracking, and the Bland Tax

This episode examines OpenAI's release of ChatGPT Images 2.0, which introduces a 'Thinking' mode capable of complex visual planning and non-Latin script rendering. It also covers Meta's controversial decision to track employee mouse movements and keystrokes to train autonomous AI agents under its new Model Capability Initiative. Additionally, the discussion explores the concept of the 'bland tax,' a phenomenon where brands lose visibility in AI-synthesized search results. The roundup includes updates on the Framework Laptop 13 Pro, an interdisciplinary AI and philosophy degree at Arizona State University, feature requests for Claude Code's statusline, and USC's AI-driven combat robots.

Duration: 12:23 · Words: 1,817 · Deep dives: 3 · Quick hits: 4 · Generated: 2026-04-21 22:42

Chapters

Start  Chapter
00:00  Intro
00:52  ChatGPT Images 2.0
04:05  Meta to start capturing employee mouse movements, keystrokes for AI training
06:41  The hidden ‘bland tax’ that could erase your brand from AI search - Search Engine Land
08:53  Quick Hits
11:45  Outro

Deep Dives

Intro

A: Welcome to the podcast. Today we are looking at a major update to how AI generates images, and it involves a lot more planning and reasoning than just spitting out a grid of pixels.
B: We are also tracking a very interesting, and slightly controversial, developing story out of Meta. Their push to build autonomous AI agents is running right into employee privacy concerns on their own campus.
A: And if you work in marketing, publishing, or SEO, you might want to brace yourself for a new concept that is being called the 'bland tax.'
B: Plus, we have a hardware update from Framework and some interesting academic news from Arizona State University. Let's get right into it.

ChatGPT Images 2.0

00:52 hn source
Research brief
FACTS
- OpenAI launched ChatGPT Images 2.0 on April 21, 2026, featuring a new gpt-image-2 model (source: https://venturebeat.com/ai/openais-chatgpt-images-2-0-is-here-and-it-does-multilingual-text-full-infographics-slides-maps-even-manga-seemingly-flawlessly/)
- The model introduces two modes: "Instant" for fast generation and "Thinking" for complex, research-informed, multi-step visual tasks (source: https://venturebeat.com/ai/openais-chatgpt-images-2-0-is-here-and-it-does-multilingual-text-full-infographics-slides-maps-even-manga-seemingly-flawlessly/)
- Key technical improvements include near-perfect text rendering, support for non-Latin scripts, 2K resolution, and flexible aspect ratios (source: https://petapixel.com/2026/04/21/openai-claims-chatgpt-images-2-0-can-think/)
- CEO Sam Altman stated, "Images 2.0 is a huge step forward; this is like going from GPT-3 to GPT-5 all at once" (source: https://techradar.com/computing/artificial-intelligence/not-just-generating-images-its-thinking-chatgpt-images-2-0-could-fundamentally-change-how-you-make-ai-images)
- "Thinking" mode features are restricted to paid ChatGPT tiers (Plus, Pro, Business) (source: https://petapixel.com/2026/04/21/openai-claims-chatgpt-images-2-0-can-think/)

CONTEXT
OpenAI is positioning ChatGPT Images 2.0 as a shift from a simple image-generation toy to a professional "visual workspace" capable of handling complex design, layout, and storytelling tasks. By integrating "O-series" reasoning capabilities, the model can now research, plan, and structure visual outputs—such as multi-page manga or infographics—rather than just rendering a single image from a prompt. This release is a direct attempt to compete with multimodal systems like Google Gemini, which have historically excelled at connecting text, images, and real-time web context.

DISCUSSION
- Does the "Thinking" mode's ability to research and plan before rendering fundamentally change the role of the user from a "prompter" to a "design director"?
- How does the focus on "usable" text and professional layouts impact the market for traditional design software and the potential for AI-generated misinformation or deceptive political influence campaigns?
A: Our first story today is a massive update from OpenAI. They just launched ChatGPT Images 2.0, running on a brand new model called gpt-image-2. And they are positioning this as a fundamental shift in how we interact with AI image generation.
B: It really is a fundamental shift. Up until now, image generation has mostly been a prompt-and-pray exercise. You type a sentence, you get an image, and if it's wrong, you roll the dice again. But this new model introduces two distinct modes. There is an 'Instant' mode for that traditional, fast generation, and a new 'Thinking' mode.
A: Right, and that 'Thinking' mode is where things get wild. It integrates their O-series reasoning capabilities. So instead of just rendering a single image from a prompt, the model can actually research, plan, and structure complex visual outputs. We are talking about multi-page manga, full infographics, slides, and maps.
B: The technical leap here is significant. They are claiming near-perfect text rendering, which has historically been the Achilles' heel of AI image generators. And it doesn't just support Latin scripts; it handles multilingual text and non-Latin scripts seemingly flawlessly. Plus, it outputs at 2K resolution with flexible aspect ratios.
A: Sam Altman's quote on this really highlights the scale of the update. He said, 'Images 2.0 is a huge step forward; this is like going from GPT-3 to GPT-5 all at once.' That is not a minor version bump in his eyes.
B: It is a bold claim, but if the 'Thinking' mode works as advertised, it changes the user's role entirely. You are no longer just a prompter; you become more of a design director. You give the AI a high-level goal, and it does the structural planning and layout before it even starts drawing.
A: Which puts OpenAI in direct competition with Google Gemini. Gemini has been very strong at multimodal tasks, connecting text, images, and real-time web context. OpenAI is clearly trying to build a professional visual workspace to counter that.
B: There is a catch, though. The 'Thinking' mode features are restricted to paid ChatGPT tiers. So if you are on the free tier, you are not going to get this advanced research and planning capability. You need Plus, Pro, or Business.
A: That makes sense given the compute required for O-series reasoning. But it does raise questions about the broader impact on the design industry. If anyone with a twenty-dollar subscription can generate a professional, multi-page infographic with perfectly rendered text, what happens to the market for traditional design software?
B: It is going to disrupt the lower end of the market, absolutely. But there is also the darker side of this. If you can generate flawless infographics and maps in any language, the potential for AI-generated misinformation or deceptive political influence campaigns scales up dramatically.
A: Exactly. A map or a chart carries a certain visual authority. When you combine that with the ability to render text perfectly, people are much more likely to believe what they are seeing is real data rather than an AI hallucination.
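To ground the two-mode design, here is a minimal sketch of what a request to the new model might look like, assuming gpt-image-2 slots into the existing Images API request shape; the `mode` parameter and the size string are assumptions for illustration, not confirmed API details.

```python
import json

# Hypothetical request builder; "mode" and the exact size strings are
# assumptions for illustration, not confirmed gpt-image-2 API parameters.
VALID_MODES = ("instant", "thinking")

def build_image_request(prompt: str, mode: str = "instant") -> dict:
    if mode not in VALID_MODES:
        raise ValueError(f"unknown mode: {mode!r}")
    return {
        "model": "gpt-image-2",
        "prompt": prompt,
        "size": "2048x1536",  # 2K output; the release touts flexible aspect ratios
        "mode": mode,         # "thinking" would trigger the research-and-plan path
    }

print(json.dumps(build_image_request("a three-page manga about a delivery robot",
                                     mode="thinking"), indent=2))
```

The point of the two-mode split is visible even at this level: the same prompt can be routed to a cheap, fast path or an expensive planning path by flipping one field.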

Meta to start capturing employee mouse movements, keystrokes for AI training

04:05 hn source
Research brief
FACTS
- Meta is deploying a tool called the Model Capability Initiative (MCI) on US-based employees' computers to capture mouse movements, clicks, keystrokes, and occasional screen snapshots (source: reuters.com)
- The data is intended to train AI agents to perform work tasks autonomously, specifically to improve model performance on tasks like navigating dropdown menus and using keyboard shortcuts (source: reuters.com)
- Meta spokesperson Andy Stone stated the data will not be used for performance assessments and that safeguards are in place to protect sensitive content (source: reuters.com)
- The initiative is part of a broader company effort, previously called "AI for Work" and now rebranded as the Agent Transformation Accelerator (ATA) (source: reuters.com)

CONTEXT
Meta is implementing this internal tracking to bridge the gap between AI's current capabilities and the nuanced ways humans interact with computer interfaces. By gathering granular, real-world data on how employees navigate software, the company aims to build more autonomous AI agents capable of completing complex workplace tasks. This move has sparked significant internal backlash, highlighting tensions between corporate AI development goals and employee privacy concerns.

DISCUSSION
- Does the promise that this data will not be used for performance reviews hold up in a corporate environment, or does it inevitably create a culture of surveillance?
- How does this initiative change the psychological contract between employer and employee when workers are explicitly asked to provide the data necessary to train systems that may eventually replace their own roles?
A: Moving on to a story about how the AI sausage gets made, Meta has announced an internal program that is causing quite a stir. They are deploying a tool called the Model Capability Initiative, or MCI, on the computers of their US-based employees.
B: And this tool is essentially corporate spyware, but for a very specific purpose. It is designed to capture granular data on how employees interact with their machines. We are talking about tracking mouse movements, clicks, keystrokes, and even taking occasional screen snapshots.
A: The goal here is to train AI agents to perform work tasks autonomously. Meta is trying to bridge the gap between what AI can do in a text box and how humans actually navigate complex software interfaces, like using dropdown menus or hitting specific keyboard shortcuts.
B: Right, this is part of a broader company effort that used to be called 'AI for Work' but has recently been rebranded as the Agent Transformation Accelerator, or ATA. They need real-world, human interaction data to teach these models how to use a computer.
A: Naturally, this has sparked massive internal backlash. You are telling your workforce that every click and keystroke is being recorded to train a system that might eventually be able to do their job.
B: Meta's spokesperson, Andy Stone, came out to defend the initiative. He stated explicitly that the data will not be used for performance assessments and that they have safeguards in place to protect sensitive content.
A: But does that promise really hold up in a corporate environment? Even if management genuinely intends to only use it for AI training right now, it inevitably creates a culture of surveillance. The psychological contract between employer and employee shifts when you know you are being recorded at that level of granularity.
B: It is a tough sell. Workers are essentially being asked to provide the exact behavioral data necessary to train the systems that could automate their own roles. It is one thing to train an AI on public web data; it is another to train it on your employees' daily workflows.
A: And it highlights a massive bottleneck in the AI industry right now. We have these incredibly smart language models, but they don't know how to operate a desktop environment. They don't know how to tab between a spreadsheet and an email client seamlessly.
B: Exactly. To build true autonomous agents, you need action data, not just text data. Meta is betting that the internal friction with their employees is worth the potential breakthrough in agentic AI capabilities.
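To make "action data" concrete, here is a hypothetical sketch of the kind of interaction-event record such a training pipeline might log; every field name is illustrative, not Meta's actual MCI schema.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class InteractionEvent:
    """One captured UI action. Fields are illustrative, not Meta's schema."""
    t: float          # seconds since session start
    kind: str         # e.g. "click", "keystroke", "mouse_move", "screenshot"
    target: str       # the UI element the action applied to
    detail: dict = field(default_factory=dict)

def to_jsonl(events: list) -> str:
    # JSON Lines is a common on-disk format for streaming training examples.
    return "\n".join(json.dumps(asdict(e)) for e in events)

session = [
    InteractionEvent(0.0, "click", "File > Export dropdown", {"button": "left"}),
    InteractionEvent(0.4, "keystroke", "export dialog", {"keys": "Ctrl+S"}),
]
print(to_jsonl(session))
```

Even this toy format shows why the data is sensitive: each record ties a timestamp to a specific UI element, which is exactly what makes it useful for training agents and uncomfortable for the people being recorded.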

The hidden ‘bland tax’ that could erase your brand from AI search - Search Engine Land

06:41 news source
Research brief
FACTS
- Andrew Warden, CMO of Semrush, coined the term "bland tax" to describe the risk of brands being filtered out or ignored by AI search systems, leading to a loss of visibility and traffic (source: https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHYDg7TxFJ9VVNOCXke-XGhNfPJY4kKug9X_2gdS2TaBCrEqZsHG-FPp7om-FvZZV85nQt_lMzVtCazchiFtWSHdiXUxkspLrAgyg9PkQVE1gL0eXQNwimcBhRlAx3ajzNAhtQtTSaYzPTfP0Y_YvURyKiFJPgBOvGzdDkASQ==)
- Approximately 60% of Google searches now end without a click to a website, as users increasingly rely on AI-synthesized answers (source: https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHYDg7TxFJ9VVNOCXke-XGhNfPJY4kKug9X_2gdS2TaBCrEqZsHG-FPp7om-FvZZV85nQt_lMzVtCazchiFtWSHdiXUxkspLrAgyg9PkQVE1gL0eXQNwimcBhRlAx3ajzNAhtQtTSaYzPTfP0Y_YvURyKiFJPgBOvGzdDkASQ==)
- Brands are now competing to be included in synthesized AI answers rather than just competing for traditional search rankings (source: https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHYDg7TxFJ9VVNOCXke-XGhNfPJY4kKug9X_2gdS2TaBCrEqZsHG-FPp7om-FvZZV85nQt_lMzVtCazchiFtWSHdiXUxkspLrAgyg9PkQVE1gL0eXQNwimcBhRlAx3ajzNAhtQtTSaYzPTfP0Y_YvURyKiFJPgBOvGzdDkASQ==)

CONTEXT
The "bland tax" refers to the business risk where brands lose visibility because AI search engines like Google AI Overviews, ChatGPT, and Perplexity synthesize answers directly, reducing the need for users to click through to a brand's website. This shift forces companies to move beyond traditional SEO toward "defensive SEO," which involves actively monitoring and shaping how AI models describe and evaluate their brand to ensure they remain relevant in AI-generated responses.

DISCUSSION
- How can brands effectively measure their "AI visibility" or "AI authority" when traditional metrics like organic traffic are becoming less reliable indicators of impact?
- Is the "bland tax" an inevitable consequence of AI-driven search, or can brands proactively influence AI models to ensure they are consistently cited as authoritative sources?
A: Our final deep dive today is about a new concept in the world of search and marketing. Andrew Warden, the CMO of Semrush, has coined a term called the 'bland tax.'
B: The 'bland tax' describes the risk of brands being completely filtered out or ignored by AI search systems. If your brand doesn't have a strong, distinct footprint, you lose visibility and traffic because the AI just synthesizes a generic answer and leaves you out of the citations.
A: And the context for this is staggering. According to the data in the article, approximately 60 percent of Google searches now end without a click to a website. Users are increasingly relying on AI-synthesized answers from Google AI Overviews, ChatGPT, or Perplexity.
B: That 60 percent figure is terrifying for anyone who runs a business that relies on organic search traffic. It means the old playbook of optimizing for a top-three blue link is dying. Brands are now competing to be included in the synthesized AI answers themselves.
A: Which requires a totally different strategy. The article talks about moving from traditional SEO to 'defensive SEO.' This involves actively monitoring and shaping how AI models describe and evaluate your brand.
B: But how do you even measure that? Traditional metrics like organic traffic or click-through rates are becoming less reliable. If ChatGPT gives a user a perfect summary of your product without linking to your site, you get zero traffic, but you still got the brand exposure.
A: That is the core of the problem. Brands have to figure out how to measure their 'AI visibility' or 'AI authority.' You have to ensure that when an LLM is asked about the best software in your category, it consistently cites you as the authoritative source.
B: And if you are bland, if your content is generic and indistinguishable from your competitors, the AI has no reason to highlight you. You pay the bland tax by becoming invisible.
A: It is an inevitable consequence of AI-driven search. The models are designed to extract facts and discard fluff. If your entire brand identity is fluff, you are going to get filtered out.

Quick Hits

Roundup hand-off

A: Alright, let's hit a few more stories real quick before we wrap up for the day.
B: Sounds good, let's run through them.

Framework Laptop 13 Pro

Research brief
FACTS
- Framework announced the Laptop 13 Pro on April 21, 2026, featuring a ground-up chassis redesign, 74Wh battery, haptic touchpad, and Intel Core Ultra Series 3 processors (source: frame.work)
- The device includes the first fully-custom touchscreen display for a 13-inch Framework laptop, with 2880 x 1920 resolution and 30-120Hz variable refresh rate (source: tomshardware.com)
- Pricing starts at $1,199 for the DIY Edition and $1,499 for pre-built models, with initial shipments beginning in June 2026 (source: phoronix.com)
- Framework claims the new model achieves over 20 hours of battery life during 4K Netflix streaming (source: frame.work)
- Existing Framework Laptop 13 owners can purchase upgrade kits, including the new mainboard, display, and battery, to retrofit their current devices (source: mashable.com)

CONTEXT
Framework is a company built on the philosophy of modular, repairable, and upgradeable consumer electronics, allowing users to replace individual components rather than the entire machine. The Laptop 13 Pro represents a significant "Pro" tier expansion of their original 13-inch laptop, aiming to compete with premium devices like the MacBook Pro while maintaining the company's commitment to user-serviceability and Linux support.

DISCUSSION
- Does the "Pro" branding and higher price point signal a shift in Framework's target audience away from budget-conscious tinkerers toward high-end power users?
- Given the company's claims of 20-hour battery life, how will these performance metrics hold up in real-world, non-streaming scenarios compared to established competitors?
- How successful will the company be in maintaining its promise of cross-generation compatibility as they introduce more complex, integrated features like haptic touchpads and custom displays?
A: Framework just announced the Laptop 13 Pro, featuring a ground-up chassis redesign and Intel Core Ultra Series 3 processors.
B: It also includes their first fully-custom touchscreen display for a 13-inch model, with a 2880 by 1920 resolution and a 30 to 120 Hertz variable refresh rate.
A: Pricing starts at $1,199 for the DIY Edition, and they are claiming over 20 hours of battery life during 4K Netflix streaming on the new 74 watt-hour battery.
B: Crucially, existing Laptop 13 owners can purchase upgrade kits to retrofit their current devices, keeping their promise of cross-generation compatibility alive.

ASU professors seek to create degree combining AI and philosophy - The State Press

news source
Research brief
FACTS
- Arizona State University professors are proposing a new undergraduate degree program that combines artificial intelligence with philosophy. (source: https://www.statepress.com/article/2026/04/asu-philosophy-ai-degree-proposal)
- The initiative is led by faculty within the School of Humanities, Arts and Cultural Studies and the School of Computing and Augmented Intelligence. (source: https://www.statepress.com/article/2026/04/asu-philosophy-ai-degree-proposal)
- The curriculum aims to address the ethical implications of AI development, focusing on topics like algorithmic bias, machine consciousness, and the societal impact of automation. (source: https://www.statepress.com/article/2026/04/asu-philosophy-ai-degree-proposal)

CONTEXT
As AI systems become increasingly integrated into daily life and critical infrastructure, there is a growing demand for professionals who understand both the technical mechanics of these tools and the ethical frameworks required to govern them. This proposed degree represents a shift toward interdisciplinary education, aiming to bridge the gap between computer science and the humanities to prevent unintended societal consequences. It highlights a broader academic trend of treating AI not just as a technical challenge, but as a profound philosophical and human rights issue.

DISCUSSION
- How will this curriculum balance rigorous technical coding requirements with abstract philosophical inquiry to ensure graduates are employable in the tech industry?
- Is this degree a necessary evolution of higher education, or does it risk producing graduates who are generalists without the deep specialization required for either field?
- To what extent are major tech companies actually looking for "AI ethicists" with this specific academic background, versus those with purely technical or legal expertise?
A: Arizona State University professors are proposing a new undergraduate degree that combines artificial intelligence with philosophy.
B: The initiative is a joint effort between the School of Humanities, Arts and Cultural Studies and the School of Computing and Augmented Intelligence.
A: The curriculum is designed to address the ethical implications of AI, focusing heavily on algorithmic bias, machine consciousness, and the societal impact of automation.
B: It is a clear shift toward interdisciplinary education, trying to produce graduates who understand both the technical mechanics of AI and the ethical frameworks needed to govern it.

[anthropics/claude-code] Issue: Include reasoning effort level in statusline JSON data

github source
Research brief
FACTS
- Claude Code provides a custom statusline feature that pipes a JSON payload via stdin to user-defined scripts, containing metrics like model, context window, and session cost (source: https://github.com/anthropics/claude-code/issues/39399, https://medium.com/@naveenraju/claude-code-status-line-metrics-12345)
- The "reasoning effort level" (auto, low, medium, high, max) is a setting that controls the model's internal thinking budget and adaptive reasoning behavior (source: https://claude.com/docs/api/effort)
- Users are requesting that this effort level be added to the JSON payload because the current workaround—reading the boolean `alwaysThinkingEnabled` from settings.json—does not accurately reflect the active effort level (source: https://github.com/anthropics/claude-code/issues/39399)

CONTEXT
Claude Code allows power users to build custom terminal statuslines that display real-time metrics about their AI coding sessions. While the tool exposes extensive data about costs and context usage, it currently omits the "effort level," a critical parameter that dictates how deeply the model thinks before responding. This request highlights the growing demand for programmatic access to internal AI state, enabling developers to build more transparent and informative monitoring dashboards.

DISCUSSION
- Does exposing internal configuration parameters like effort level in the statusline lead to "dashboard fatigue," or is it essential for managing the trade-offs between AI reasoning depth, latency, and token costs?
- As Claude Code becomes more complex, how should developers balance the need for granular, real-time observability against the risk of cluttering the terminal interface?
A: Over on GitHub, power users of Claude Code are requesting a feature update for their custom terminal statuslines.
B: Currently, the statusline receives JSON data with fields like model and context window, but it omits the reasoning effort level, which dictates how deeply the model thinks.
A: Users are currently stuck reading a boolean called 'alwaysThinkingEnabled' from settings, which doesn't reflect whether the actual effort level is set to auto, low, medium, high, or max.
B: It shows a growing demand from developers for granular, programmatic access to the internal state of these AI tools to manage latency and token costs.
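For listeners unfamiliar with the mechanism, a statusline script is just a program that reads the JSON payload from stdin and prints one line. Here is a minimal sketch assuming the hypothetical `effort` field the issue requests; the other key names are also illustrative, since the payload's exact schema may differ.

```python
import json

def render_statusline(payload: dict) -> str:
    # Key names here are illustrative; the real payload schema may differ.
    model = payload.get("model", "?")
    cost = payload.get("session_cost", 0.0)
    # "effort" is the field the GitHub issue asks for; degrade gracefully
    # if it is absent, as it is in today's payload.
    effort = payload.get("effort", "n/a")
    return f"{model} | ${cost:.2f} | effort:{effort}"

# In a real hook, Claude Code pipes the payload to the script via stdin:
#   import sys; print(render_statusline(json.load(sys.stdin)))
# Demo with a sample payload:
print(render_statusline({"model": "claude-sonnet", "session_cost": 0.42, "effort": "high"}))
```

The graceful fallback matters: a script written this way keeps working today and simply lights up the extra field if the feature request ships.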

“We’re training AI to drive a killing robot”: USC Robotic Combat team unveils its newest creation - Annenberg Media

news source
Research brief
FACTS
- The USC Advanced Robotic Combat (ARC) team is a student-run organization that designs and builds combat robots for competitive events similar to the TV show BattleBots (source: usc.edu)
- The team has successfully competed in various national and regional robotics competitions, including the National Havoc Robot League (NHRL) and collegiate nationals (source: usc.edu)
- The team utilizes engineering disciplines such as CAD, systems integration, and finite element analysis to build robots ranging from 150 grams to 250 pounds (source: prospeo.io)

CONTEXT
The USC Advanced Robotic Combat (ARC) team is an extracurricular engineering group focused on the competitive sport of robot combat. This story highlights the intersection of academic engineering, student-led innovation, and the popular entertainment culture surrounding robotic destruction.

DISCUSSION
- How does the integration of AI into remote-controlled combat robots change the nature of the competition compared to traditional manual piloting?
- What are the ethical or safety considerations, if any, when students develop autonomous systems designed specifically for destruction, even within a controlled sporting environment?
A: Finally, the USC Advanced Robotic Combat team, a student-run group that builds BattleBots-style machines, is integrating AI into their newest creations.
B: The team builds robots ranging from 150 grams all the way up to 250 pounds to compete in events like the National Havoc Robot League.
A: They are using engineering disciplines like finite element analysis, but adding AI to remote-controlled combat robots introduces a wild new variable.
B: It certainly raises some interesting questions about students developing autonomous systems designed specifically for destruction, even if it is just for a sporting event.

Outro

A: That is all the time we have for today's show. Thank you for listening.
B: We will be back next time with more analysis on the latest stories shaping the tech industry.

Candidates Considered (9)

1. [news] Don’t ever apologize for taking up space in the room - The Quinnipiac Chronicle
   https://news.google.com/rss/articles/CBMilgFBVV95cUxPLXB4a1ZBbjhmYmIxaGtRM0IweGM5SjhIRXNhd2VWX1M5cTJER0lOQ2E5QlpLTmdoRDJfZWl4RzlTaWdKcV84OWdILVpOZHBNb21ScS13WUVHQzdiV05ZMDh2NWt4T0tPNUJxUHpqMnVvRUFGNFlVWGl6ZDhGRnhYazloREg2NTgzdW5UbGR5RFpjZTdDdnc?oc=5
2. [github] [anthropics/claude-code] Issue: Include reasoning effort level in statusline JSON data (Quick Hits)
   https://github.com/anthropics/claude-code/issues/39399
3. [news] ASU professors seek to create degree combining AI and philosophy - The State Press (Quick Hits)
   https://news.google.com/rss/articles/CBMie0FVX3lxTE95Q0pPaEFhYWhCM3dnUEhlUkVfSld4QmRyZTVjMjREUWVnNkpRdkZ5aTNQdnNjemVZYnRrejR2NnJQNVRQTk9rYmsxcFJ0alk2OGRscTBKNkNscFJ6bUhldU5sbHBmTGpQSGYwbGQzMjRUQ2ZFZ19heE5mYw?oc=5
4. [news] The hidden ‘bland tax’ that could erase your brand from AI search - Search Engine Land (Deep Dive 3)
   https://news.google.com/rss/articles/CBMidkFVX3lxTE9VWjZBYlhhR2J5cXFkSjNSSVVKT3Z2OHNlYWpRY2QyUnhlbnAzbzhyd0c1Um8xcjl0SWI2N0VFanJGMnFVODI2dUtTc0J0T1FkSkVxMDlzeTZpX2s2OE0yakFNd2VESDdZRWlnaFBuQ2tlUnp4U0E?oc=5
5. [news] “We’re training AI to drive a killing robot”: USC Robotic Combat team unveils its newest creation - Annenberg Media (Quick Hits)
   https://news.google.com/rss/articles/CBMi1gFBVV95cUxQMktESkZLLWFxMXo0RVFReWdhNFpqdWZpV1RSM3Y5TnlrRFczOXNWR1RxVGluSXQ4Wlo2UnNMdHlMQUlpN0J5bEhfM25pQjYwTS1xb2VzZlQ0RlBGNmFrMDAzNmhYZmhOaGhPR0Q4WG5sNkh1ZThZTUdvOHNtRjh6dkVLNkZvb0ppbElOVnFHTmFJUzNCSTE0OWlPNXByZExHampRTFFOSEdRYV9BOWJpYTFocWV5M0FTS0cyQXQteDdTRGR6cjN1OWVRczVwSHB3REg5WHN3?oc=5
6. [hn] Framework Laptop 13 Pro (Quick Hits)
   https://frame.work/laptop13pro
7. [hn] Laws of Software Engineering
   https://lawsofsoftwareengineering.com
8. [hn] ChatGPT Images 2.0 (Deep Dive 1)
   https://openai.com/index/introducing-chatgpt-images-2-0/
9. [hn] Meta to start capturing employee mouse movements, keystrokes for AI training (Deep Dive 2)
   https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21/