When AI meets Product: January’25 AI Product Updates
Keeping up to date with new AI models, products, ethics, and trends
Welcome to the January edition of “When AI Meets Product — AI Product Updates”. January has been a wild month, with DeepSeek making headlines and shaking up markets. There’s been plenty more happening in the AI world, though! In this edition, we cover the most important updates:
- GenAI Model Updates — DeepSeek has dominated the conversation, while other major players focused on expanding integrations and launching new features on top of their models.
- AI Product Updates — Key developments in LLMs for text analysis & clustering, the growing importance of evaluation, the rise of AI agents, and insights into what makes companies succeed with AI.
- AI Ethics and Legislation Updates — New regulations and notable updates in the AI ethics landscape.
- Other resources — A mix of perspectives on the future of AI product management, product-market fit challenges in AI, and key industry trends for 2025.
GenAI Model Updates
DeepSeek Challenges Big Tech
This month, DeepSeek filled the news with its model, DeepSeek R1. It has 671 billion parameters and performs similarly to OpenAI’s best public model — but it was reportedly trained for far less money (only $6 million). It is also open-source, which means anyone can download it, run it in their own systems, or build on it. It is also much cheaper to use compared to OpenAI’s latest model: $0.55 per million input tokens (OpenAI: $15) and $2.19 per million output tokens (OpenAI: $60).
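To make the price gap concrete, here is a quick back-of-envelope cost comparison. This is only an illustration: the per-token prices come from the figures above, while the monthly token volumes are assumptions.

```python
# Rough API cost comparison (USD per million tokens, using the prices quoted above).
PRICES = {
    "DeepSeek R1": {"input": 0.55, "output": 2.19},
    "OpenAI (latest model)": {"input": 15.00, "output": 60.00},
}

def monthly_cost(input_tokens: int, output_tokens: int, price: dict) -> float:
    """Total USD cost for a given monthly token volume."""
    return (input_tokens * price["input"] + output_tokens * price["output"]) / 1_000_000

# Illustrative workload: 100M input tokens and 20M output tokens per month (assumed numbers).
for model, price in PRICES.items():
    print(f"{model}: ${monthly_cost(100_000_000, 20_000_000, price):,.2f} per month")
```

With these assumed volumes, that works out to roughly $99 vs. $2,700 per month — the same order-of-magnitude gap as the per-token prices.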
This launch had a huge impact on the stock market. NVIDIA lost around $600 billion in market value as investors started to question whether the U.S. will stay the leader in AI. Many are also wondering whether bigger models and huge investments are really necessary for better AI. At the same time, OpenAI is investigating DeepSeek, suspecting it may have used OpenAI’s model outputs to train DeepSeek R1 (a process called distillation).
Governments Also Invest in AI
Companies aren’t the only ones building AI models. The Spanish government launched ALIA, a large language model for Spanish, Catalan, Basque, and Galician.
What Comes After Transformers? Titans
Most AI models today use transformers, but what’s next? Google Research introduced a new model architecture called Titans, which could improve AI efficiency: it introduces a new module “that learns to memorize historical context and helps attention to attend to the current context while utilizing long past information”.
Big Tech Focuses on New Features
While DeepSeek launched a new model, other AI companies focused on improving their existing AI products this month:
- Microsoft: Added AI agents inside 365 Copilot Chat to improve automation.
- Google: Released Gemini features for all Google Workspace business users.
- OpenAI: Launched two main features: Scheduled Tasks (which lets ChatGPT run automated prompts and proactively reach out to you on a schedule), and Operator (similar to Anthropic’s Computer Use, an agent that uses its own browser to look at a webpage and interact with it by typing, clicking, and scrolling).
AI Use Cases & Product Updates
LLMs for Text Analysis & Clustering
Large language models (LLMs) are increasingly being used for text analysis, topic modeling, and clustering.
A great example is Anthropic’s recent report analyzing Claude usage. Beyond insights into how people use the model (coding, writing, research) and differences across languages, the methodology itself is noteworthy from a data science perspective: ensuring privacy by summarizing and anonymizing conversations, detecting topics, clustering them, and organizing them hierarchically.
Amazon has also applied LLMs to analyze qualitative data, following a similar approach — extracting topics and clustering insights for better usability.
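For a flavor of what this kind of pipeline can look like, here is a minimal sketch of LLM-assisted text clustering — summarize and anonymize, embed, then cluster. This is my own illustration, not Anthropic’s or Amazon’s actual implementation; it assumes the openai and scikit-learn packages, and the model names and prompt are placeholders.

```python
# Minimal sketch: summarize -> embed -> cluster. Model names and prompt are illustrative.
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(text: str) -> str:
    """Ask an LLM for a short summary with personal details removed."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user",
                   "content": f"Summarize in one sentence, removing any personal details:\n{text}"}],
    )
    return resp.choices[0].message.content

def cluster_texts(texts: list[str], n_clusters: int = 5) -> list[int]:
    """Embed the anonymized summaries and group them into topics with k-means."""
    summaries = [summarize(t) for t in texts]
    emb = client.embeddings.create(model="text-embedding-3-small", input=summaries)
    vectors = [item.embedding for item in emb.data]
    return KMeans(n_clusters=n_clusters, random_state=0).fit_predict(vectors).tolist()
```

A fuller pipeline would also organize the resulting clusters hierarchically, as the report above describes.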
The AI Evaluation Challenge: A Growing Priority
Evaluating AI models is becoming the most important and time-consuming aspect of AI development. Some key resources:
- Paradigm shifts in LLM evaluation — Why evaluation is now more crucial than ever, the need to benchmark, and the role of human review.
- FACTS Grounding (Google DeepMind) — A new benchmark to assess factual accuracy in LLM outputs.
- Evaluation-Driven Development — Chip Huyen introduces, among many other relevant topics around AI engineering, the idea of “evaluation-driven development” and why this concept makes sense now (see the sketch after this list). Her GitHub also has a lot of relevant resources.
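For a flavor of what evaluation-driven development can look like in practice, below is a minimal sketch of a tiny eval harness: a fixed set of test cases that runs on every model or prompt change. The cases and the simple substring check are assumptions for illustration, not something taken from the resources above.

```python
# Minimal eval harness sketch: run fixed test cases on each model/prompt change and track the pass rate.
from typing import Callable

TEST_CASES = [
    {"prompt": "What is the capital of France?", "must_contain": "Paris"},
    {"prompt": "Convert 2 km to meters.", "must_contain": "2000"},
]

def run_evals(generate: Callable[[str], str]) -> float:
    """`generate` wraps your LLM call; returns the fraction of cases that passed."""
    passed = 0
    for case in TEST_CASES:
        answer = generate(case["prompt"])
        ok = case["must_contain"].lower() in answer.lower()
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['prompt']}")
    return passed / len(TEST_CASES)

# Usage idea: score = run_evals(my_llm_call); fail the CI build if the score drops below a threshold.
```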
AI Agents: The Next Big Trend for 2025
AI agents are emerging as the next big trend, but they come with both promise and challenges (a minimal sketch of the basic agent loop follows the list below):
- How Are Companies Using AI Agents? — Covers examples such as agents for drug discovery, financial analysis, writing code, selling items, answering employee questions (“ask me anything”), and solving customer issues.
- A product perspective on AI agents — A great introduction covering their potential, risks, and limitations.
- Making websites agent-accessible — As AI agents rise, a new product, Browser Use, is tackling the challenge of making websites more accessible to them.
- An engineering perspective on AI agents — Chip Huyen’s deep dive into the technical aspects of AI agents.
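To make “AI agent” more concrete, here is a minimal sketch of the loop most agents share: the model either answers directly or requests a tool, the tool result is fed back in, and the cycle repeats. The tools and the JSON protocol are illustrative assumptions, not taken from the articles above.

```python
# Minimal agent-loop sketch. Tools, protocol, and stop condition are illustrative.
import json

def search_web(query: str) -> str:      # stand-in tool (assumption)
    return f"Top result for '{query}' ..."

def get_weather(city: str) -> str:      # stand-in tool (assumption)
    return f"Sunny in {city}, 18 degrees"

TOOLS = {"search_web": search_web, "get_weather": get_weather}

def run_agent(call_llm, user_goal: str, max_steps: int = 5) -> str:
    """`call_llm` wraps your model. It returns either a plain-text final answer or a JSON
    tool request such as {"tool": "get_weather", "args": {"city": "Madrid"}}."""
    history = [f"Goal: {user_goal}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(history))
        try:
            request = json.loads(reply)                       # the model asked for a tool
            result = TOOLS[request["tool"]](**request["args"])
            history.append(f"Tool {request['tool']} returned: {result}")
        except (json.JSONDecodeError, KeyError, TypeError):
            return reply                                      # plain text -> final answer
    return "Stopped: step limit reached."
```

Most of the risks discussed in the articles above (looping, taking unintended actions, getting stuck) live inside this loop, which is why step limits and human checkpoints matter.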
What Makes Companies Succeed with AI?
Several recent reports analyze what sets successful AI adopters apart:
- What Companies Succeeding with AI Do Differently — Identifies executive sponsorship, strong partnerships, cross-department communication, and effective data management as key enablers.
- 2025 playbook for enterprise AI success — Major themes: AI agents as the next automation wave, evaluation as the foundation for reliable AI, cost efficiency as a key focus, personalization as a growing opportunity, and the balance between inference cost and test-time compute.
- Technical Considerations for Business Leaders Operationalizing Gen AI — Key factors covered: choosing the right foundational model (cost vs. capability trade-offs), customization and data as key strategies for differentiation (e.g. through RAG — see the sketch after this list), the need to manage risks (security, privacy, responsible AI), and prioritizing business value over technical novelty.
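Since RAG keeps coming up as the default customization strategy, here is a minimal sketch of the pattern — retrieve the most relevant snippets from your own data and prepend them to the prompt. The cosine-similarity retrieval and the pluggable `embed` function are assumptions for illustration, not a recommendation from the report above.

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones, stuff them into the prompt.
import numpy as np

def retrieve(question: str, docs: list[str], embed, top_k: int = 3) -> list[str]:
    """`embed` wraps any embedding model and returns one vector per input text."""
    doc_vecs = np.array(embed(docs))
    q_vec = np.array(embed([question]))[0]
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    best = np.argsort(scores)[::-1][:top_k]
    return [docs[i] for i in best]

def rag_prompt(question: str, docs: list[str], embed) -> str:
    """Build a grounded prompt from the retrieved context."""
    context = "\n\n".join(retrieve(question, docs, embed))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In production you would swap the in-memory list for a vector database and cache the document embeddings, but the core retrieve-then-generate flow stays the same.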
AI Ethics and Legislation Updates
This month, we’ve seen important discussions on AI risks, new laws shaping AI governance, and key updates in AI ethics research.
A Must-Watch Panel: “Unmasking the Future of AI”
I recently watched one of the most insightful conversations on AI’s potential and risks between Joy Buolamwini and Sam Altman. Two very different voices discussing the balance between innovation and responsibility, regulation and self-governance, current vs existential risks, and who should be making the calls.
New AI Legislation: Who’s Moving Forward?
AI regulation is accelerating worldwide, and two major acts were just introduced:
- The new Texas Responsible AI Governance Act (TRAIGA), which aims to place heavy compliance obligations on both developers and deployers of any “high-risk” AI system “that is a substantial factor to a consequential decision”.
- Korea’s AI Basic Act (the Act on the Development of Artificial Intelligence and Establishment of Trust) aims both to promote the development of AI in the country and to set the legal basis for preventing related risks.
- Hard to keep up with AI regulations? IAPP (International Association of Privacy Professionals) just launched the Global AI Legislation Tracker.
What’s New in AI Ethics?
- “Frontier Models are Capable of In-context Scheming” — a paper that analyzes how different models behave when they are instructed to pursue goals and placed in environments that incentivize scheming. It shows this can happen with most models, through strategies that include introducing subtle mistakes into responses or attempting to disable oversight mechanisms.
- AI as Legal Persons: Past, Patterns, and Prospects — covers the challenges of AI legal personhood from a legal perspective.
- Nathan Bos has an interesting post on his updates to the AI Ethics course for 2025. Updates include LLM interpretability, a stronger focus on human-centered AI with grounded examples, and deep dives into law and governance.
- Human-centered AI is key to ensuring AI solutions responsibly prioritize the people who will use them. This blog post introduces the most relevant frameworks for developing human-centered AI solutions throughout the development and deployment lifecycle.
Other resources
The Future of AI Product Management — Andrew Ng
Andrew Ng wrote an interesting piece about the future of the AI Product Manager role: as writing software becomes easier and faster, we should expect increased demand for AI PMs to help ensure that what is built is valuable. Today, many companies have around six engineers for every PM (roughly a “two-pizza team”). But in the future, as engineers become more efficient, we might see this ratio drop to 3:1 — the same number of engineers split into smaller teams, each with its own PM, meaning more PMs overall. The piece also covers the most important skills for AI PMs: technical and data proficiency, iterative development, managing ambiguity, responsible AI, and curiosity.
What Does an AI Product Manager Do? — My Take
I recently wrote a post introducing the AI PM role, based on my three years of experience in the position. The post covers the different types of AI PMs, their main skills, tasks, and daily responsibilities, and the biggest challenges of the role.
Product-Market Fit Collapse & AI — Reforge
AI is accelerating shifts in Product-Market Fit (PMF), meaning companies that once had a solid market position are suddenly at risk. This piece explores how AI-driven changes impacted companies like Chegg and Stack Overflow — and what you can do to avoid the same fate: understand how your customers’ expectations are changing, evaluate your level of risk for PMF collapse, and allocate your portfolio of product bets accordingly.
PwC’s AI Predictions for 2025
PwC released a report on AI predictions for 2025: the importance of a good AI strategy, agents entering the workforce, responsible AI as key to differentiation and trust, AI helping achieve sustainability goals, much faster development lifecycles, and the transformation of entire industries.
Wrapping it up
That was it from When AI Meets Product — January’25 AI Product Updates. 2025 already promises to bring much more exciting progress in generative models, ethical frameworks, and real-world applications. Stay tuned!