Welcome to today’s AI Boost!

Today brings a wild takeover bid and a big leap in context length. Perplexity says it wants to buy Google’s Chrome, Anthropic expands Claude Sonnet 4 to a one-million-token context window, and mobile “AI companion” apps are on a nine-figure revenue pace. YouTube begins testing AI age checks, OpenAI restores a detailed model picker in ChatGPT, and Liquid AI ships tiny vision-language models that can run on phones.

Wall Street has Bloomberg. You have Stocks & Income.

Why spend $25K on a Bloomberg Terminal when 5 minutes reading Stocks & Income gives you institutional-quality insights?

We deliver breaking market news, key data, AI-driven stock picks, and actionable trends—for free.

Subscribe for free and take the first step towards growing your passive income streams and your net worth today.

Stocks & Income is for informational purposes only and is not intended to be used as investment advice. Do your own research.

1. Perplexity makes a surprise $34.5B offer for Chrome

Perplexity submitted an unsolicited all-cash $34.5 billion bid to acquire Google’s Chrome browser, positioning itself as a buyer-in-waiting if a U.S. court forces Google to divest Chrome as an antitrust remedy. The startup says outside funds would finance the deal, and it promises to keep Chromium open source, keep investing in the browser, and leave Google as the default search engine. Chrome has billions of users, which makes the move strategically bold but highly unlikely to happen without a court-ordered divestiture; still, it signals how fiercely AI search rivals want browser distribution.

2. Claude Sonnet 4 now supports a 1M-token context

Anthropic lifted Sonnet 4’s context window to one million tokens, a fivefold increase that lets teams load large codebases or hundreds of documents into a single prompt. The long-context beta is available on the Anthropic API and Amazon Bedrock, with Vertex AI support coming soon. Pricing is unchanged up to 200K tokens and then roughly doubles for longer prompts, so Anthropic recommends prompt caching and batch processing to cut cost and latency. Customers such as Bolt.new and iGent AI are already using the larger window for multi-day coding sessions and agent workflows.
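
If you want to try the bigger window, here’s a minimal sketch using the Anthropic Python SDK with prompt caching on the large, unchanging prefix. The 1M-context beta header and the exact model ID are assumptions on my part, so check Anthropic’s docs for current values.

```python
# Minimal sketch of a long-context request with prompt caching via the
# Anthropic Python SDK. The model ID and the 1M-context beta flag shown
# here are assumptions; consult Anthropic's docs for current values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Pretend this is a large codebase or document dump flattened to text.
big_corpus = open("repo_dump.txt").read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=2048,
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},  # assumed beta header
    system=[
        {
            "type": "text",
            "text": big_corpus,
            # Cache the huge, unchanging prefix so follow-up questions
            # don't pay full input cost and latency for it every time.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[
        {
            "role": "user",
            "content": "Where is the retry logic implemented, and what backoff does it use?",
        }
    ],
)
print(response.content[0].text)
```

Since prompts beyond 200K tokens are billed at the higher rate, caching the big prefix is what keeps repeated questions over the same corpus affordable.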

3. AI companion apps are on pace for $120M in 2025

New Appfigures data shows that 337 revenue-generating companion apps brought in $82 million in the first half of 2025 and are tracking toward more than $120 million for the full year. Downloads reached 60 million in H1, up 88 percent year over year, bringing the lifetime total to 220 million. The market is top-heavy: the top 10 percent of apps capture 89 percent of spending, and about 33 titles have cleared $1 million in lifetime consumer revenue. Popular categories tilt toward relationship-style companions like Replika and Character.AI.
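
A quick back-of-the-envelope pass on those figures (illustrative arithmetic only, not additional Appfigures data):

```python
# Back-of-the-envelope arithmetic on the reported figures (illustrative only).
h1_downloads_2025 = 60_000_000   # reported H1 2025 downloads
yoy_growth = 0.88                # reported 88% year-over-year growth
implied_h1_2024 = h1_downloads_2025 / (1 + yoy_growth)
print(f"Implied H1 2024 downloads: ~{implied_h1_2024 / 1e6:.1f}M")  # ~31.9M

h1_revenue = 82_000_000          # reported H1 2025 consumer spend
top_decile_share = 0.89          # top 10% of apps capture 89% of spend
apps = 337
print(f"Top ~{apps // 10} apps account for roughly "
      f"${h1_revenue * top_decile_share / 1e6:.0f}M of H1 spend")  # ~$73M
```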

4. YouTube tests AI age verification in the U.S.

YouTube will pilot an AI system that estimates whether a logged-in viewer is under 18 based on viewing patterns rather than the birthday on their account. Viewers flagged as minors will get limited recommendations, no personalized ads, and teen safeguards such as screen-time reminders; they can appeal with an ID, credit card, or selfie. The test starts with a small U.S. cohort and could expand if accuracy matches results YouTube has seen in other regions. The trial comes as pressure grows for stronger online age checks.

5. ChatGPT’s model picker returns with new modes

After a bumpy GPT-5 launch, OpenAI reintroduced a visible model picker with Auto, Fast, and Thinking modes for GPT-5, letting users prioritize speed or deeper reasoning. Paid users can again select legacy models like GPT-4o and o3 from settings, an acknowledgment that the automatic router did not satisfy every use case. Leadership says the team is iterating quickly and plans more personality and customization controls as feedback from the launch comes in.
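
The picker itself is a ChatGPT UI control, but the API exposes a similar speed-versus-depth dial. Here’s a rough sketch with the OpenAI Python SDK, assuming the Responses API’s reasoning-effort setting applies to GPT-5 the way it does for other reasoning models:

```python
# Rough API-side analogue of the Fast vs. Thinking choice (a sketch, not the
# ChatGPT picker itself): dial reasoning effort up or down per request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Fast"-style request: minimal reasoning, optimized for latency.
quick = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "minimal"},  # assumed to be supported for GPT-5
    input="Summarize this changelog in two bullet points: ...",
)

# "Thinking"-style request: allow deeper reasoning at higher latency and cost.
deep = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},
    input="Find the race condition in this code and propose a fix: ...",
)

print(quick.output_text)
print(deep.output_text)
```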

6. Liquid AI brings small, fast vision-language models to phones

Liquid AI released LFM2-VL in two sizes, roughly 450 million and 1.6 billion parameters, designed for on-device multimodal tasks like reading images and answering questions about them. The models pair a SigLIP2-based vision encoder with efficient image patching to handle resolutions up to 512×512 natively, and Liquid AI claims up to 2× faster GPU inference than comparable VLMs while staying competitive on benchmarks like RealWorldQA, InfoVQA, and OCRBench. Weights are on Hugging Face under a custom license, with example Colabs so developers can try them on a single GPU or a modern smartphone.
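
If you want to poke at one of these locally, the usual Hugging Face image-text-to-text pattern should get you going. The repo ID and auto classes below are assumptions based on that pattern, so defer to the LFM2-VL model card (and its custom license) for the official snippet.

```python
# Minimal sketch of running a small VLM from Hugging Face on a single image.
# The repo ID and auto classes are assumptions based on the usual transformers
# image-text-to-text pattern; check the LFM2-VL model card for the exact usage.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "LiquidAI/LFM2-VL-450M"  # assumed Hugging Face repo ID
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("receipt.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "What is the total amount on this receipt?"},
        ],
    }
]

# Build the multimodal prompt and run a short generation.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```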

Turn AI from Party Trick to Profit Machine

I built Infinity with one goal: to hand you the same AI systems and automations I lean on every day.

Plug them in, cut the busywork, and watch new revenue pop up.

You’ll quickly become the go-to AI expert everyone trusts.

Ready to plug in?

How would you rate today's newsletter?

Vote below to help us improve the newsletter for you.


Stay tuned for more updates, and have a fantastic day!

Cheers,
Zephyr
