ADOS Paris 2025 will showcase interactive AI art at Artifex Lab on March 28-29. The exhibition will feature works created with Daydream, Livepeer's real-time AI video tool.
The event follows Christie's recent AI art auction, *Augmented Intelligence*, which highlighted works from prominent digital artists including Refik Anadol, Sougwen Chung, and Gene Kogan.
- Interactive demonstrations of real-time AI video creation
- Opportunity to explore the intersection of AI and art
- Limited spots available
Livepeer Advisory Boards Release Strategic Recommendations for 2025
Livepeer's Advisory Boards have unveiled their tactical recommendations for network development through 2025. The plan focuses on four key areas:

- **Governance**: Implementing faster upgrade processes and accountability systems
- **Markets**: Optimizing token economics and treasury management
- **Growth**: Focusing on AI video integration and strategic partnerships
- **Network**: Scaling GPU compute and improving developer tools

The recommendations represent months of collaborative work and set clear, measurable objectives for the network's evolution. Next steps include roadmap design, funding allocation, and execution.

[View detailed recommendations](https://forum.livepeer.org/t/advisory-boards-phase-2-tactical-recommendations-2025/3025)
Livepeer Shifts Focus to Real-time AI Video Development

Livepeer announces a strategic focus on real-time AI video development, marking significant growth in Q2 2025:

- Network hits new all-time highs in fees and usage minutes
- Q2 sees a 21% increase in AI fees, reaching $115K total
- Staking participation exceeds 50%

Key initiatives include:

- Building developer tools
- Community growth programs
- Network demand acceleration

The Livepeer Foundation will oversee long-term decentralized ecosystem development. Projects like @DaydreamLiveAI showcase the platform's real-time video AI capabilities, offering open-source, community-driven solutions.

[Read the full announcement](https://blog.livepeer.org/livepeer-incorporated-and-realtime-ai/)
Daydream Launches Real-Time Video AI Platform on Livepeer
**Daydream**, a new open-source platform for real-time video-to-video AI transformations, has launched on Livepeer's decentralized infrastructure. The platform enables real-time AI video processing and is now live at [daydream.live](https://daydream.live).

Key features:

- Community-powered development
- Open-source architecture
- Built on Livepeer's decentralized network
- Real-time video AI capabilities

Livepeer is supporting this initiative through delegator workshops and network infrastructure. *Try the early preview to explore real-time AI video transformations.*
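For readers curious what prompt-driven, real-time video-to-video transformation looks like from the client side, here is a minimal Python sketch. It is purely illustrative and assumes a placeholder HTTP endpoint; the URL, request fields, and response format below are not the actual Daydream or Livepeer API.

```python
# Hypothetical client loop for prompt-driven, real-time video-to-video transformation.
# NOTE: the endpoint URL and request fields are illustrative assumptions only;
# they are NOT the actual Daydream or Livepeer API.
import cv2            # pip install opencv-python
import numpy as np
import requests

INFERENCE_URL = "https://example.invalid/v1/transform"   # placeholder endpoint
PROMPT = "turn the scene into a watercolor painting"

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        ok, jpeg = cv2.imencode(".jpg", frame)  # compress the frame for transport
        if not ok:
            continue

        # Send the frame plus a text prompt; expect a transformed JPEG back.
        resp = requests.post(
            INFERENCE_URL,
            files={"frame": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
            data={"prompt": PROMPT},
            timeout=5,
        )
        resp.raise_for_status()

        # Decode and display the transformed frame.
        out = cv2.imdecode(np.frombuffer(resp.content, np.uint8), cv2.IMREAD_COLOR)
        if out is not None:
            cv2.imshow("transformed", out)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

A production pipeline would stream frames over WebRTC or RTMP rather than posting JPEGs one at a time, but the core loop of "frame in, prompt-conditioned frame out" is the same.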
Livepeer Fireside: AI Video Innovation Updates
Peter from AI SPE discusses the evolution of real-time AI video on Livepeer in the latest Fireside episode. Key developments include:

- Transition to Bring Your Own Container (BYOC) architecture
- Introduction of ComfyStream for customizable AI pipelines (see the sketch below)
- New community-driven development approach

This follows recent episodes featuring UFO's decentralized radio integration and earlier discussions with Streamplace, ORIGIN STRIES, and Eliza agents about AI-powered video innovations.

[Watch the full episode](https://youtu.be/wFSs8kfvsYA)
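As a rough illustration of what a "customizable AI pipeline" means in practice, the sketch below composes per-frame processing stages into a single callable. It is a generic illustration under assumed stage functions, not ComfyStream's real interface or the BYOC container contract.

```python
# Illustrative sketch of a composable per-frame pipeline (NOT ComfyStream's API).
# Each stage is a function that takes a frame (numpy array) and returns a frame.
from typing import Callable, List
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def build_pipeline(stages: List[Stage]) -> Stage:
    """Compose stages left-to-right into one frame-processing function."""
    def run(frame: np.ndarray) -> np.ndarray:
        for stage in stages:
            frame = stage(frame)
        return frame
    return run

# Example stages: simple stand-ins for what would be model calls in a real pipeline.
def denoise(frame: np.ndarray) -> np.ndarray:
    import cv2  # pip install opencv-python
    return cv2.GaussianBlur(frame, (5, 5), 0)   # blur as a denoising placeholder

def stylize(frame: np.ndarray) -> np.ndarray:
    return 255 - frame                          # color inversion as a style placeholder

pipeline = build_pipeline([denoise, stylize])

if __name__ == "__main__":
    dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)
    out = pipeline(dummy_frame)
    print(out.shape)  # (480, 640, 3)
```

The appeal of this structure is that stages can be swapped or reordered without touching the streaming loop, which is the same flexibility a BYOC setup aims for at the container level.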
Livepeer Unveils Real-time AI Video Vision
Livepeer cofounder Doug Petkanics outlines the network's expansion into real-time AI video processing. The initiative aims to make live video manipulation as simple as typing a ChatGPT prompt.

Key developments:

- The network aims to provide ultra-low latency, heavy compute, and global reach
- New features include AI VTubers, interactive livestreams, and real-time translation
- The Daydream platform launches as a proof of concept
- Treasury funding supports development of new applications

The network expects a significant increase in usage fees for node operators and delegators through AI video processing capabilities.

[Try Daydream](https://daydream.live/) | [Read the full article](https://blog.livepeer.org/daydream-live-a-glimpse-into-the-future-of-realtime-ai-video-on-livepeer/)