I keep up with the AI landscape via AI-generated podcast episodes, using the following workflow:
Anything you're interested in. If you're curious about my method: mostly the TL;DR mailing list, which covers topics beyond AI. Eventually I'll automate topic-based content generation in /tts.
Lately I'm deep-diving into agentic workflows and Claude Code subtopics. I'm exploring the Lang family (LangChain, LangGraph, LangSmith); N8N; and various Claude Code topics like Ralph, Skills, Beads, etc.
This part's important: use Deep Research, rather than basic chat. Gemini, ChatGPT, Claude, Perplexity - they all have one.
If you don't have a subscription, use Gemini's; it's free. I personally favor ChatGPT's, but my opinion changes all the time. They're all good, and constantly leap-frogging each other.
Each AI tool has a "Deep Research" or "Research" mode. In the chat area, click the + icon or look under the Tools section.
Replace [TOPIC] with whatever you're learning: "transformer attention mechanisms", "RLHF in language models", etc.
I want to deeply understand [TOPIC]. Write a comprehensive educational guide covering:
- Core concepts and fundamentals
- Historical context and why this matters
- How it works technically (at an intermediate level)
- Key papers, breakthroughs, or milestones
- Current state-of-the-art and open problems
- Practical applications and real-world examples
Write for someone with a technical background who is new to this specific topic.
Be thorough but avoid unnecessary jargon. Aim for 3000-5000 words.
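If you'd rather script this step than paste the prompt by hand, here's a minimal sketch in Python. The template mirrors the prompt above; build_prompt is a hypothetical helper I'm using for illustration, not part of /tts or any of the research tools.

```python
# Minimal sketch: fill the [TOPIC] placeholder before pasting into Deep Research.
# build_prompt is a hypothetical helper; the template text is the prompt above.
PROMPT_TEMPLATE = """I want to deeply understand [TOPIC]. Write a comprehensive educational guide covering:
- Core concepts and fundamentals
- Historical context and why this matters
- How it works technically (at an intermediate level)
- Key papers, breakthroughs, or milestones
- Current state-of-the-art and open problems
- Practical applications and real-world examples
Write for someone with a technical background who is new to this specific topic.
Be thorough but avoid unnecessary jargon. Aim for 3000-5000 words."""


def build_prompt(topic: str) -> str:
    """Swap the [TOPIC] placeholder for a concrete topic string."""
    return PROMPT_TEMPLATE.replace("[TOPIC]", topic)


if __name__ == "__main__":
    print(build_prompt("transformer attention mechanisms"))
```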
If you don't want to use my tool, I recommend ElevenLabs GenFM. ElevenLabs has near-perfect voice realism. I'd personally stop your search there; I went down the Speechify vs Natural Reader vs ... rabbit hole, and Eleven wins all day, every day. I compare models here (my tool uses Kokoro by default, and Qwen3-TTS for voice-cloning).
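For the local-generation side, here's a minimal sketch of what a Kokoro narration step can look like, assuming the open-source kokoro Python package and soundfile. The voice name, text, and output paths are just examples; this is not my tool's actual pipeline.

```python
# Minimal sketch of local TTS with Kokoro (assumes `pip install kokoro soundfile`).
import soundfile as sf
from kokoro import KPipeline

pipeline = KPipeline(lang_code="a")  # "a" = American English in the kokoro package

text = "Today's episode: a deep dive into agentic workflows."

# The pipeline yields (graphemes, phonemes, audio) chunks; write each chunk to a WAV file.
for i, (graphemes, phonemes, audio) in enumerate(pipeline(text, voice="af_heart")):
    sf.write(f"episode_chunk_{i}.wav", audio, 24000)  # Kokoro outputs 24 kHz audio
```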
Pay to unlock voices. Then in the voices list you'll find Tyler! Generation costs more credits, since Qwen is more compute-intensive than Kokoro. But the output quality is better, and hey, it's me!