AI Intelligencer-What matters in AI this week

By Krystal Hu

July 2 (Reuters) - (Artificial Intelligencer is published every Wednesday. Think your friend or colleague should know about us? Forward this newsletter to them. They can also subscribe here.)

Professionals spend, on average, three hours a day in their inboxes.

That single statistic, which Grammarly CEO Shishir Mehrotra shared with me in my exclusive story on their latest move, is the key to understanding his company’s acquisition of email tool Superhuman.

The vision, he explained, is to build a network of specialized AI agents that can pull data from across your private digital workflow—emails, documents, calendars—to reduce the time you spend searching for information or crafting responses.

This vision of a helpful AI agent, however, isn't just about getting to inbox zero. It's a preview of a much larger, more disruptive shift happening across the entire web. Scroll down for more on that.

Do you experience this shift in your work or daily use of the internet already? Email me here or follow me on LinkedIn to share feedback and tell me what you want to read about next in AI.

* Asia is a formidable force in the AI race. Register to watch the live broadcast of the #ReutersNEXTAsia summit on July 9 to hear from executives and experts on the ground about what digital transformation looks like there.

A NEW INTERNET WITH MORE AI BOTS THAN HUMANS

For decades, the internet worked like this: Google indexed millions of web pages, ranked them and showed them in search results. We’d click through to individual websites—Reuters, the New York Times, Pinterest, Reddit, you name it. Those sites then sold our attention to advertisers, earning more ad dollars or subscription fees for producing high-quality, engaging or unique content you couldn’t get anywhere else.

Now, AI companies are pitching a new way to deliver information: everything you want, inside a chat window. Imagine your chatbot answering any question by scraping information from across the web—without you ever having to click through to the original source. That’s what some AI companies are pitching as a more “optimized” web experience, except that the people creating the content will get left behind.

In this new online world, as envisioned by AI companies like OpenAI, navigating the web would be frictionless. Users would no longer bother with clicking links or juggling tabs. Instead, everything would happen through chat, with personal AI agents doing the dirty work of browsing the internet, performing tasks and making decisions, such as comparing plane tickets, on your behalf. So-called “agents” are autonomous AI tools that act on a user’s instructions, fetching information and interacting with websites.

The shift is happening fast, according to Cloudflare, a content delivery network that oversees about 20% of web traffic. In the past few months, it has started hearing complaints from publishers, such as news websites, about plunging referral traffic. The data pointed to one trend: more bot activity, fewer human visits and lower ad revenue.

Bots have long been an integral part of the internet. Good bots crawl and index websites, helping them get discovered and recommended when users search for relevant services or information. Bad bots are typically those that overwhelm websites with traffic and cause crashes.

And then there is a new category: AI bots built to feed large language models (LLMs). AI companies send these automated programs to scrape websites and copy vast amounts of online information. The volume of such bot activity has risen 125% in just six months, according to Webflow data.
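
For a sense of the mechanics, here is a minimal sketch, in Python, of how such a crawler operates: it announces a user agent, checks the site’s robots.txt, and copies the page if allowed. The bot name “ExampleAIBot” is made up for illustration; GPTBot and ClaudeBot are the real user agents used by OpenAI and Anthropic, and websites can allow or block them in the same way.

    # A toy crawler sketch, not any company's actual scraper.
    # "ExampleAIBot" is a hypothetical user agent used only for illustration.
    import urllib.request
    import urllib.robotparser

    USER_AGENT = "ExampleAIBot/1.0"
    PAGE = "https://example.com/article"

    # Well-behaved bots read robots.txt first to see what they may fetch.
    robots = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
    robots.read()

    if robots.can_fetch(USER_AGENT, PAGE):
        req = urllib.request.Request(PAGE, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        print(f"Copied {len(html)} characters of page text.")
    else:
        print("robots.txt disallows this bot; a polite crawler stops here.")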

The first wave of AI data scraping hit books and archives. Now there’s a push for real-time access, putting online content owners in the crosshairs, because chatbot users want information about both history and current events—and they want it to be accurate, without hallucinations.

This demand has sparked a wave of partnerships and lawsuits between AI companies and media companies. OpenAI is signing on more news sources while Perplexity is trying to build out a publisher program that was met with little fanfare. Reddit sued Anthropic over data scraping, even as it inked a $60 million deal with Google to license its content.

AI companies argue that web crawling isn’t illegal. They say they’re optimizing the user experience, and that they’ll try to offer links to the original sources when they aggregate information.

Website owners are experimenting, too. Cloudflare’s “block or pay” crawler model, launched Tuesday, has already gained support from dozens of websites, from Condé Nast to Reddit. It’s a novel attempt to charge for the use of content on a per-crawl basis, although it’s too early to tell whether the payments would make publishers whole for the loss of human visitors.
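
How might “block or pay” work in practice? The sketch below is a simplified illustration of the idea, not Cloudflare’s implementation, and the payment header and price it uses are hypothetical: requests from known AI crawler user agents receive an HTTP 402 Payment Required response unless they carry proof of payment, while everyone else is served as usual.

    # A simplified illustration of per-crawl charging, not Cloudflare's implementation.
    # The "X-Crawl-Payment" header and the price are hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    AI_BOT_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot")  # known AI crawler user agents
    PRICE_PER_CRAWL_USD = "0.01"  # hypothetical price set by the publisher

    class PayPerCrawlHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            agent = self.headers.get("User-Agent", "")
            is_ai_bot = any(bot in agent for bot in AI_BOT_AGENTS)
            has_paid = "X-Crawl-Payment" in self.headers  # hypothetical payment proof

            if is_ai_bot and not has_paid:
                # Block: tell the crawler this content costs money to fetch.
                self.send_response(402)  # Payment Required
                self.send_header("X-Crawl-Price-USD", PRICE_PER_CRAWL_USD)
                self.end_headers()
                self.wfile.write(b"Payment required to crawl this page.\n")
                return

            # Paid bot or human visitor: serve the page as usual.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Article content</body></html>\n")

    HTTPServer(("", 8080), PayPerCrawlHandler).serve_forever()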

CHART OF THE WEEK

Data from Cloudflare reveals how drastically the web has shifted in just six months. The number of pages crawled per visitor referred has risen sharply—especially among AI companies. Anthropic now sends its bot to scrape 60,000 times for every single visitor it refers back to a website.

For site owners who monetize human attention, this presents real challenges. And for those hoping to have their brands or services featured in AI chatbot responses, there's growing pressure to build "bot-friendly" websites—optimized not for humans, but for machines, according to Webflow CEO Linda Tong.

WHAT AI RESEARCHERS ARE READING

A study from the MIT Media Lab, “Your Brain on ChatGPT,” digs into what happens in our heads when we write essays using a large language model (LLM) like ChatGPT, a search engine, or just our own brainpower. The research team recruited university students and split them into three groups: one could only use ChatGPT, another used traditional search engines like Google (no AI answers allowed), and a third had to rely on memory alone.

The findings are striking. Writing without any digital tools led to the strongest and most widespread brain connectivity, especially in regions associated with memory, creativity, and executive function. The “Search Engine” group showed intermediate engagement—more than the LLM group, but less than brain-only—while those using ChatGPT exhibited the weakest neural coupling. In other words, the more we outsource to AI, the less our brains are forced to work.

But the story doesn’t end there. Participants who used LLMs not only had less brain engagement but also struggled to remember or quote from their own essays just minutes after writing. They reported a weaker sense of ownership over their work, and their essays tended to be more homogeneous in style and content. In contrast, those who wrote unaided or used search engines felt more attached to their writing and were better able to recall and accurately quote what they’d written.

Interestingly, when participants switched tools—going from LLM to brain-only or vice versa—the neural patterns didn’t fully reset. Prior reliance on AI seemed to leave a trace, resulting in less coordinated brain effort when writing unaided. The researchers warn that frequent LLM use may lead to an “accumulation of cognitive debt”—a kind of atrophy of the mental muscles needed for deep engagement, memory and authentic authorship.

The takeaway? Use AI tools wisely, but don’t let them do all the thinking for you—or you might find your own voice, and memory, fading into the background.

AI JARGON YOU NEED TO KNOW

Imagine if every device required a unique charging cable. AI has faced a similar challenge: each external tool, like calendars or email, needed custom-built connections, making integrations slow and complex.

Introducing the Model Context Protocol (MCP), a new standard from Anthropic that’s gaining traction with major players like OpenAI, Microsoft, and Google. It serves as a universal adapter for AI models, enabling seamless communication with diverse tools and data. This means AIs can better manage tasks, integrate with apps, and access real-time information.

MCP is vital for the rise of autonomous AI agents because it eliminates the need for custom integrations, paving the way for more integrated and helpful AI in our daily lives.
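
As an illustration, here is a minimal sketch of an MCP server, assuming the official open-source Python SDK (the “mcp” package) and its FastMCP helper; the calendar lookup is a hypothetical stand-in for whatever data source you would want an assistant to reach.

    # A minimal MCP server sketch, assuming the official Python SDK ("mcp" package).
    # The calendar lookup is a hypothetical placeholder, not a real integration.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("calendar-demo")  # the server name shown to connecting AI clients

    @mcp.tool()
    def next_meeting(person: str) -> str:
        """Return the next meeting scheduled with the given person."""
        # A real server would query a calendar API here; this is hard-coded.
        return f"Next meeting with {person}: Thursday, 10:00 a.m."

    if __name__ == "__main__":
        # Any MCP-aware client can now discover and call next_meeting
        # through the same standard protocol, with no custom integration.
        mcp.run()

Once a server like this is running, hooking it up to an MCP-aware client such as Claude Desktop becomes a configuration step rather than new code, which is the “universal adapter” effect described above.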

LLM, NLP, RLHF: What's a jargon term you'd like to see defined? Email me and I might feature the suggestion in an upcoming edition.

GRAPHIC-The growing inefficiency of search engine bot crawling https://reut.rs/45VCQ6J

(Reporting by Krystal Hu; Editing by Ken Li and Lisa Shumaker)

((krystal.hu@thomsonreuters.com, +1 917-691-1815))
