Learn By Doing With Steven

2025 Year-End Analysis: The Divergent Evolutions of Biological and Artificial Intelligence

Happy holidays, dear readers! I finally found a few moments to reflect on what I have learned this year, but not enough time to write it all out myself. With Gemini's help, I still have something worth reading for you.

Before you read, let me clarify:

  1. This article was drafted with Gemini Deep Research; human alteration is minimal.
  2. The original idea was to compare what the model could generate from my prompt and handwritten notes against a freshly written article (a plan I later abandoned).
  3. The actual steps were: jotting down ideas on paper (as shown in the picture above), preparing the prompt, checking the report for anything I should add, making small changes, and publishing.
  4. It turns out the report is very much in line with what was already on my mind (except that I studied C++ ten years ago, and the only impression that remains with me today is the three words in the picture). Some slight differences, and a few unintentional compliments on my podcast, are present; I decided to leave them in.
  5. My prompt: Expand these notes to a 2025 wrap up of: what is intelligence, how far has GenAI reached, how these points I mentioned in the notes shaped my understanding of these topics and why some tasks are easy to tackle with gen ai but some are still hard to learn. Take human as example, human compress info in the brain or via genetics, algorithms can hardly be efficient. Brain is hard to encode to neural networks. Biological and transgenerational intelligence are deeper. Human use first principal to design softwares and parallel computation hardwares but in a broad sense they are not agile, like the sequential loop conditional patterns I learned years ago. Real world situations are too compliacated and hard to model, take RL game on plant wheat or raise sheeps with weather uncertainty(wheat destroyed), easy to model with lots of assumptions but hard in reality. Language compression is like, simple words to complicated words, but lost some easy to understand components and increase the cost to decipher. LLM compresses languages to numerical space and increased certainty via computation, not like how people build dictionary, which is more like majority consensus vote of interpretation without rigorous probability computation. Then, after a year watching sequoia a16z ycombinator, AIE summit, lex fridman, dwarkesh Patel, etc, I begin to limit my focus range on: investor view and market trend, coding capability and business side adoption of ai tools, research and benchmarks. There are many other topics in my mind but I start denoise now so I can be more focused on year 2026. In the meantime, my Steven data talk has gathered some ground, will work on deeper conversations in the future and try to talk with more engineers and researchers.


The Ontological Duality of Modern Intelligence

The year 2025 marks a definitive transition in the global understanding of intelligence, moving away from a monolithic view toward a dualistic appreciation of biological and artificial paradigms. Intelligence is increasingly recognized not as a single metric of performance, but as a strategic choice in how information is processed, compressed, and utilized for adaptation. The biological model, refined over eons, prioritizes semantic fidelity and contextual richness, whereas the generative artificial intelligence (GenAI) model, characterized by the rapid scaling of 2023–2024, has moved toward a state of aggressive statistical compression.1 This divergence explains why the "easy" tasks for humans, such as high-stakes decision-making in unstructured environments, remain fundamentally "hard" for machines, while the "hard" tasks of massive data synthesis have become trivial for large language models (LLMs).

Biological intelligence is inherently transgenerational and grounded in physical reality. Human information processing is defined by a semantic-contextual strategy where knowledge is organized into compact categories to balance compression with meaning preservation.1 While humans compress information—for example, grouping a "robin" and a "blue jay" as "birds"—the biological brain preserves essential, fine-grained details known as semantic fidelity.1 In contrast, LLMs follow a strategy that prioritizes information-theoretic efficiency based on Rate-Distortion Theory, achieving a more "optimal" balance by statistical measures but often missing the nuanced internal structures of concepts, such as item typicality or functional roles.1
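To make this tradeoff concrete, here is a toy sketch of my own (not from the cited paper): a rate-distortion-style cost that charges for the bits needed to name a category (the "rate") plus a β-weighted penalty for lost within-category detail (the "distortion"). The items, features, and weights are all invented for illustration.

```python
import math

# Hypothetical binary features: (flies, has_feathers, is_tool, sings)
items = {
    "robin":    (1, 1, 0, 1),
    "blue_jay": (1, 1, 0, 1),
    "penguin":  (0, 1, 0, 0),
    "hammer":   (0, 0, 1, 0),
}

def distortion(cluster):
    """Mean squared distance of members to the cluster centroid."""
    vecs = [items[name] for name in cluster]
    centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
    return sum(sum((x - c) ** 2 for x, c in zip(v, centroid)) for v in vecs) / len(vecs)

def rate(clustering):
    """Bits needed to name a cluster, assuming clusters are used uniformly."""
    return math.log2(len(clustering))

def cost(clustering, beta=1.0):
    """Rate-distortion-style objective: rate + beta * average distortion."""
    n = sum(len(c) for c in clustering)
    avg_distortion = sum(len(c) * distortion(c) for c in clustering) / n
    return rate(clustering) + beta * avg_distortion

fine   = [["robin", "blue_jay"], ["penguin"], ["hammer"]]   # preserves fine-grained detail
coarse = [["robin", "blue_jay", "penguin"], ["hammer"]]     # aggressive compression

print(cost(fine), cost(coarse))
```

With β = 1 (efficiency-first, the LLM-like regime) the coarse grouping wins; raising β far enough (fidelity-first, the brain-like regime) flips the preference to the finer categories.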

The current state of intelligence can be analyzed through the lens of efficiency and grounding. LLMs operate within a numerical vector space where certainty is generated via probability, yet they completely lack the direct sensorimotor grounding that connects words to their referents in the world.3 This absence of "felt experience" or conscious comprehension means that while an AI can generate coherent symbols, it lacks the phenomenological aspect of meaning that characterizes human understanding.3 As the industry enters 2026, the focus is shifting from raw model size to bridging this "calibration gap" between internal statistical confidence and external communicative reality.4
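The "calibration gap" can be pictured with a toy computation (all numbers invented): group a model's answers by stated confidence, then compare each group's confidence with its empirical accuracy.

```python
# Each pair: (model's stated confidence, whether the answer was actually correct).
# Hypothetical data for illustration only.
predictions = [
    (0.95, True), (0.95, True), (0.95, False), (0.95, False),
    (0.60, True), (0.60, False), (0.60, True), (0.60, False),
]

def calibration_gap(preds):
    """Mean absolute gap between stated confidence and accuracy, per confidence level."""
    by_conf = {}
    for conf, correct in preds:
        by_conf.setdefault(conf, []).append(correct)
    gaps = []
    for conf, outcomes in by_conf.items():
        accuracy = sum(outcomes) / len(outcomes)
        gaps.append(abs(conf - accuracy))
    return sum(gaps) / len(gaps)

# Here the model is right half the time at both confidence levels, so the
# 0.95 bucket is badly overconfident (gap 0.45) and the 0.60 bucket mildly so.
print(calibration_gap(predictions))
```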

The Convergence of Biological and Synthetic Systems

The boundaries between artificial and biological systems are blurring as AI moves beyond productivity tools into specialized industry applications. In 2025, Y Combinator and other major investment hubs have signaled a move toward "Vertical AI"—models deeply embedded in healthcare, law, manufacturing, and bioengineering.5 This shift is particularly evident in the convergence of AI and synthetic biology, where deep learning is revolutionizing genomic research, protein design, and the engineering of mammalian genetic programs.7

Biological sequences, such as DNA and proteins, possess a fundamentally different structure from linguistic sequences. Traditional text compression algorithms, designed for linear Markov dependencies, exhibit disappointing performance on biological data.10 The human genome generates approximately 100 gigabytes of data per individual, and by late 2025, global genomic data is projected to reach 40 exabytes, outstripping the growth of traditional computation and storage.8 AI for Genomics has become essential to manage this "data deluge," using specialized architectures like Long Short-Term Memory (LSTM) networks to capture long-range dependencies in genetic sequences.8
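You can see the gap with a quick standard-library experiment. The random ACGT string below is a simplification (real genomes contain structure, just not the kind text compressors exploit well), but it shows why a near-uniform four-letter alphabet gives a Markov-oriented compressor so little to work with compared with redundant natural language.

```python
import random
import zlib

random.seed(0)

# Natural-language-like text: highly redundant, compresses extremely well.
text = ("the quick brown fox jumps over the lazy dog " * 200).encode()

# DNA-like sequence: 4-letter alphabet, near-uniform, few long-range repeats.
dna = "".join(random.choice("ACGT") for _ in range(len(text))).encode()

def ratio(data):
    """Compressed size as a fraction of original size (lower is better)."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"text: {ratio(text):.3f}  dna-like: {ratio(dna):.3f}")
```

The DNA-like string stays near its ~2 bits-per-base entropy floor (ratio around 0.25 or above), while the repetitive text collapses to a tiny fraction of its size.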

| Metric | Biological Intelligence (Human Brain) | Artificial Intelligence (2025 SOTA) |
|---|---|---|
| Power consumption | Approximately 20 watts [12] | Up to 50 megawatts for supercomputers [13] |
| Information strategy | Contextual/multifaceted [1] | Aggressive statistical compression [1] |
| Learning methodology | Transgenerational/evolutionary [7] | Large-scale data training (next-token) [1] |
| Grounding | Direct sensorimotor [3] | Indirect/probabilistic [3] |
| Typicality retention | High (captures nuanced typicality) [1] | Low (struggles with fine-grained nuances) [1] |

The integration of AI into synthetic biology has moved through phases, evolving from simple discriminative models to generative "AI biological designers" capable of predictively designing cellular functions.7 However, the "incompressible" nature of proteins suggests that biological intelligence is deeper than simple pattern recognition; it is a manifestation of physical and chemical laws that silicon-based neural networks are only beginning to approximate.10

Structural Rigidity and the Evolution of Computation

A major insight from 2025 research concerns the limitations of current software and hardware design "first principles." For decades, software engineering has relied on sequential, loop-conditional patterns (e.g., C++) that, while powerful, are fundamentally non-agile compared to biological parallel processing.12 The von Neumann bottleneck, the separation of memory and processing that requires constant data transport, has become a physical and economic barrier as AI models grow.13

Neuromorphic computing has emerged as the primary contender to resolve this bottleneck. By mimicking the event-driven and parallel nature of brain networks, neuromorphic chips like Intel’s Loihi 2 and IBM’s NorthPole integrate memory and computation locally at artificial synapses.12 These systems enable "sparse" processing, where computations are only triggered when relevant "spikes" are detected, mirroring biological neurons and drastically reducing energy consumption.12
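The event-driven idea can be sketched with a minimal leaky integrate-and-fire neuron. The constants below are illustrative, not taken from Loihi 2 or NorthPole: the point is simply that output (a spike) is produced only when rare input events push the membrane potential over a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy sketch of
# event-driven, sparse processing. All constants are illustrative.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = leak * v + current          # potential decays, then integrates input
        if v >= threshold:              # a spike fires only on threshold crossing...
            spikes.append(t)
            v = 0.0                     # ...then the potential resets
    return spikes

# Mostly-silent input: computation happens only at rare events.
inputs = [0.0] * 50
inputs[10] = inputs[11] = 0.6          # a brief burst crosses the threshold
inputs[30] = 0.4                       # a weak, isolated input does not
print(simulate_lif(inputs))
```

Over 50 time steps this neuron emits exactly one spike; everything else is silence, which is the sparsity that lets neuromorphic hardware skip work (and energy) that a clocked architecture would spend anyway.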

| Architecture Type | Processing Paradigm | Core Advantage (2025) | Key Barrier |
|---|---|---|---|
| Von Neumann (GPU/CPU) | Sequential/synchronous | Mature ecosystem, high precision | Memory wall, power limits [13] |
| Neuromorphic (SNN) | Parallel/asynchronous | Energy efficiency (milliwatt range) | Software standardization [12] |
| Vertical TPU (Google) | Integrated matrix ops | Cost efficiency via vertical integration | Proprietary walled garden [6] |

In 2025, researchers at the Chinese Academy of Sciences unveiled the "Speck" chip, which consumes only 0.42 milliwatts when idle and requires a fraction of the pre-training data of conventional models.12 This shift toward agile hardware is mirrored in the software world by the rise of "vibe coding," where natural language agents replace traditional sequential loops with high-level orchestrations, although complex system design still requires human judgment to manage trade-offs.15

The Reality Gap: Reinforcement Learning and Physical Complexity

The difficulty of applying AI to real-world situations, such as agriculture or robotics, highlights the limitations of assumption-heavy modeling. While Reinforcement Learning (RL) excels in simulated "harvest games," the complexity of actual ecosystems—characterized by weather uncertainty, pest outbreaks, and uneven soil—presents a "reality gap" that is hard to bridge.17 For example, in wheat management, the majority of research is currently focused on remote sensing (satellites and UAVs), yet the transition to truly autonomous decision-making is stalled by the difficulty of modeling unstructured field environments.17

RL-powered machinery, such as harvesters and drones, must navigate dynamic distributions of plants and transient obstacles that cannot be perfectly modeled in a digital twin.17 Furthermore, the lack of high-quality simulations that accurately reflect real-world agricultural scenarios constrains the ability to test and refine RL models comprehensively.18 In the sheep industry, genomic selection suffers when pedigree errors—occurring in 10-20% of cases—reduce the accuracy of genomically-enhanced estimated breeding values (GEBV).20

The 2025 Global Agricultural Productivity (GAP) Report indicates that global productivity growth is plateauing, necessitating a move beyond "input intensification" toward "efficiency optimization" through AI agents.21 Agronomists are calling for new bottom-up spatial scaling approaches that better incorporate local weather and soil variations, as current statistical models rely too heavily on best-case scenarios and fertile soil averages.23
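A toy Monte Carlo makes the point about best-case modeling. All probabilities and yields below are invented, but the shape of the result is general: a deterministic "fertile soil, good weather" estimate sits well above the expectation once storms and droughts are allowed to happen.

```python
import random

random.seed(42)

# Hypothetical parameters for a toy wheat season (not real agronomy data).
BASE_YIELD = 100.0       # yield in good weather, arbitrary units
STORM_PROB = 0.15        # chance a storm destroys the crop entirely
DROUGHT_PROB = 0.25      # chance a drought halves the yield

def deterministic_yield():
    """Assumption-heavy model: fertile soil, best-case weather."""
    return BASE_YIELD

def simulated_yield():
    """One stochastic season with weather uncertainty."""
    if random.random() < STORM_PROB:
        return 0.0                     # wheat destroyed
    if random.random() < DROUGHT_PROB:
        return BASE_YIELD * 0.5
    return BASE_YIELD

trials = 100_000
expected = sum(simulated_yield() for _ in range(trials)) / trials
print(f"best-case: {deterministic_yield():.1f}  expected: {expected:.1f}")
```

The expected yield lands around 74 units against a best-case estimate of 100, and a planner (or RL agent) trained only on the deterministic model never sees the zero-yield seasons that dominate real risk.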

Language Compression and the Dictionary of Meaning

The way humans and machines handle language reveals a fundamental tension in information theory. Human language evolution tends to compress simple concepts into more complicated words to increase communication efficiency, but this often loses "easy to understand" components and increases the deciphering cost for newcomers.1 LLMs perform a different kind of compression: they map language to a high-dimensional numerical space where "meaning" is a product of probabilistic co-occurrence.1

This numerical compression differs from how people build dictionaries, which is more akin to a "majority consensus vote" of interpretation rooted in shared social and sensorimotor experience.3 While LLMs broadly align with human conceptual categories, they demonstrate an "aggressive statistical compression" that prioritizes information-theoretic efficiency over the contextual richness found in biological intelligence.1 This strategy enables LLMs to store vast amounts of knowledge efficiently but contributes to their unpredictability, as they lack the multifaceted understanding required to navigate "messy" real-world semantics.1
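A miniature version of this numerical compression can be built from raw co-occurrence counts, a crude stand-in for what LLM embeddings do at scale. The corpus and the choice of cosine similarity are my own toy setup, not anyone's actual method: each word is represented purely by the words it appears with, and "meaning" falls out of the statistics.

```python
import math
from collections import Counter

# A tiny corpus; an LLM plays out the same idea over trillions of tokens.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
    "stocks rose as markets rallied",
    "markets fell as stocks slid",
]

# Co-occurrence vectors: each word is described by its sentence neighbors.
vectors = {}
for sentence in corpus:
    words = sentence.split()
    for w in words:
        ctx = vectors.setdefault(w, Counter())
        ctx.update(x for x in words if x != w)

def cosine(a, b):
    """Cosine similarity between two words' co-occurrence vectors."""
    keys = set(vectors[a]) | set(vectors[b])
    dot = sum(vectors[a][k] * vectors[b][k] for k in keys)
    na = math.sqrt(sum(v * v for v in vectors[a].values()))
    nb = math.sqrt(sum(v * v for v in vectors[b].values()))
    return dot / (na * nb)

# "cat" and "dog" share contexts (chased, ate); "cat" and "stocks" share none.
print(cosine("cat", "dog"), cosine("cat", "stocks"))
```

No dictionary committee voted on what "cat" means here; the similarity between "cat" and "dog" is computed, not agreed upon, which is exactly the contrast with consensus-built dictionaries drawn above.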

The Investor View and Market Consolidation

The investor landscape in late 2025, as analyzed by Sequoia, a16z, and Y Combinator, has moved from a period of "primordial soup" to one of solidified foundations.6 Capital expenditure (CapEx) is beginning to stabilize as the massive data centers rising across the United States come online.6 This has led to a stabilization of compute prices, effectively creating a "subsidy" from Big Tech that benefits startups and net-new innovation.6

Market trends show that the model race has narrowed to five major finalists: Microsoft/OpenAI, Amazon/Anthropic, Google, Meta, and xAI.6 Each has developed distinct "superpowers" to survive the consolidation:

Google: Achieving full vertical integration with its own chips (TPUs), data centers, and research teams.6

Meta: Leveraging the power of open-source (Llama) to dominate distribution networks.6

Anthropic: Focusing on high-end research talent and specialized tools like Claude Code.6

xAI: Leading in the speed of physical infrastructure deployment.6

| Entity | Primary Advantage (2025) | Market Strategy |
|---|---|---|
| Sequoia | Long-term ROI focus | Predicting AI Search as the "killer app" [6] |
| a16z | Enterprise adoption | Tracking "vibe coding" in the workplace [25] |
| Y Combinator | Early-stage specialized AI | Funding synthetic biology and vertical AI niches [5] |
| Big Tech (general) | Infrastructure control | Moving from data center scramble to project execution [6] |

Investment is increasingly flowing toward AI Search, which Sequoia predicts will proliferate as specialized engines for white-collar domains, such as Harvey for legal professionals and OpenEvidence for medical practitioners.6 These tools rely on proprietary data and domain-specific intent extraction, which provides a higher barrier to entry than general-purpose chatbots.6

Coding Capability and the Professional Engineering Landscape

The impact of GenAI on software engineering (SWE) has been profound, with over 90% of developers now integrating AI coding assistants into their regular practices.15 The market has evolved from simple autocomplete in 2023 to agentic systems capable of autonomous multi-file refactoring and architectural decision-making by late 2025.15

However, benchmarks reveal a striking "capability gap" as tasks increase in complexity. While top models like Claude Opus 4.5 and GPT-5 score high on traditional bug-fixing benchmarks like SWE-bench Verified, they struggle with sustained, multi-file reasoning in evolution-oriented tasks.15

| Benchmark | Top Model Performance (2025) | Difficulty Context |
|---|---|---|
| SWE-bench Verified | ~80.9% (Claude Opus 4.5) [15] | Isolated bug patches and issue resolution |
| SWE-EVO | ~21% (GPT-5) [26] | Continuous evolution of existing systems |
| SWE-Bench Pro | <25% (GPT-5) [16] | Enterprise-level, contamination-resistant tasks |

The "vibe coding" movement, named Collins Dictionary’s Word of the Year for 2025, reflects the mainstream adoption of natural language programming.15 While 30-50% of code in major firms like Microsoft, Coinbase, and Robinhood is now AI-generated, senior developers report that AI's inability to understand business context or perform deep system design remains a critical limitation.15

Research, Benchmarks, and the Intellectual Ecosystem

The intellectual discourse surrounding AI in 2025 has been shaped by marathon conversations with luminaries such as Yoshua Bengio, Sam Altman, and Andrej Karpathy on platforms like the Lex Fridman Podcast and the Dwarkesh Podcast.27 These discussions have pivoted from raw scaling laws toward the "soul" of open-source and the future of human-AI collaboration.28

The "DeepSeek moment" of early 2025, which highlighted the low cost of training high-performing reasoning models in China, shook the Silicon Valley consensus and redirected focus toward training efficiency and the democratization of reasoning capabilities.28 Researchers are now prioritizing "reasoning models" that disclose their chain-of-thought, such as DeepSeek-R1, which rivals proprietary American models like OpenAI’s o1 and o3-mini in benchmarks while remaining open-weight.29

| Intellectual Platform | Key Theme (2025) | Notable Contributors |
|---|---|---|
| Lex Fridman Podcast | Nature of intelligence and AGI [27] | Elon Musk, Sundar Pichai, Terence Tao [28] |
| Dwarkesh Podcast | Research "whys" and intellectual foundations [27] | Jeff Dean, Mark Zuckerberg, Dario Amodei [27] |
| Latent Space | Nuts and bolts of AI engineering [27] | Industry experts, autonomous agent builders |
| Steven Data Talk | European AI ecosystem and long-termism [31] | Milan-based data scientists, local community leaders |

The emergence of "Steven Data Talk" and other community-driven platforms signals a "denoising" of the AI hype cycle. In Milan and other European hubs, the focus has shifted toward building resilient, localized AI communities that prioritize "slow media" and deep, meaningful conversations over the "involuting" competition of Silicon Valley.31 This approach emphasizes building a personal brand and technical focus range—including investor views, market trends, and coding capability—to prepare for the structural shifts expected in 2026.32

The Horizon of 2026: Consolidation and Contextual Reasoning

As the industry concludes 2025, the overarching theme is one of "Execution and Alignment." The primordial enthusiasm of previous years has matured into a disciplined search for real-world utility and ROI. The "hard" side of AI—the subjective, felt experience of understanding and the ability to navigate high-uncertainty physical environments—remains the frontier.3

The transition to 2026 will likely be defined by:

Agent Orchestration: Moving beyond single agents to multi-agent networks governed by orchestrators that can handle long-horizon, autonomous projects without human intervention.33

Neuromorphic Scaling: Scaling small-scale brain-inspired prototypes to enterprise-level systems that can bring AI learning and inference to mobile and edge devices without the energy costs of massive data centers.12

Semantic Realignment: Bridging the information-theoretic efficiency of LLMs with the contextual richness of human conceptual systems to create more reliable and robust AI assistants.1

Denoised Focus: A strategic reduction in cognitive noise, as engineers and researchers limit their focus to high-impact domains like vertical benchmarks, bioengineering, and physical-world RL.5

Biological and transgenerational intelligence continue to offer deeper lessons in efficiency and adaptation. While silicon-based models have mastered the "pixelverse" and the "codeverse," the "realityverse"—the messy, unpredictable, and physically grounded world—remains the ultimate benchmark for the next generation of artificial systems. The 2025 wrap-up confirms that while AI has reached unprecedented heights in symbol manipulation, the journey toward genuine, grounded understanding has only just begun.

Intelligence = Adaptive Utility / Information Complexity

This ratio, increasingly explored in Rate-Distortion research, remains the guiding principle for both biological evolution and artificial architectural design as the world prepares for 2026.1 The stabilization of infrastructure and the maturation of coding agents provide the platform, but the "soul" of intelligence remains a transgenerational mystery that biological systems have solved, and artificial systems are still learning to encode.

Works cited

1. Why LLMs don't think like you: A look at the compression-meaning ..., accessed December 27, 2025, https://bdtechtalks.com/2025/06/30/llm-compression-meaning-tradeoff/
2. From Tokens to Thoughts: How LLMs and Humans Trade ..., accessed December 27, 2025, https://openreview.net/forum?id=rkthPeHvAX
3. Language writ large: LLMs, ChatGPT, meaning, and understanding - PMC - PubMed Central, accessed December 27, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11861094/
4. (PDF) What large language models know and what people think they know - ResearchGate, accessed December 27, 2025, https://www.researchgate.net/publication/388234257_What_large_language_models_know_and_what_people_think_they_know
5. Tech Trends from YC Startups 2025: What Founders and Investors Need to Watch - Medium, accessed December 27, 2025, https://medium.com/@sachhsoft/tech-trends-from-yc-startups-2025-what-founders-and-investors-need-to-watch-b478c80ed5a7
6. AI in 2025: Building Blocks Firmly in Place | Sequoia Capital, accessed December 27, 2025, https://sequoiacap.com/article/ai-in-2025/
7. The convergence of AI and synthetic biology: the looming deluge - Genetic Engineering and Society Center, accessed December 27, 2025, https://ges.research.ncsu.edu/wp-content/uploads/2025/07/Vindman_Cummings_et-al.-AI-and-synbio-2025.pdf
8. AI for Genomics: The 2025 Revolution - Lifebit, accessed December 27, 2025, https://lifebit.ai/blog/ai-for-genomics/
9. Model-guided design of mammalian genetic programs - ResearchGate, accessed December 27, 2025, https://www.researchgate.net/publication/349452013_Model-guided_design_of_mammalian_genetic_programs
10. (PDF) Protein is Incompressible - ResearchGate, accessed December 27, 2025, https://www.researchgate.net/publication/2549431_Protein_is_Incompressible
11. SeqCompress: An Algorithm for Biological Sequence Compression. | Request PDF, accessed December 27, 2025, https://www.researchgate.net/publication/265173815_SeqCompress_An_Algorithm_for_Biological_Sequence_Compression
12. Brain-Inspired Breakthroughs: Neuromorphic Computing Poised to Reshape AI's Future, accessed December 27, 2025, https://markets.financialcontent.com/stocks/article/tokenring-2025-10-27-brain-inspired-breakthroughs-neuromorphic-computing-poised-to-reshape-ais-future
13. Neuromorphic Computing Roadmap 2025 - Topsector ICT, accessed December 27, 2025, https://digital-holland.nl/assets/images/default/Roadmap-Neuromorphic-Computing-2025-Short-1.0.pdf
14. Neuromorphic Computing 2025: Current SotA - human / unsupervised, accessed December 27, 2025, https://humanunsupervised.com/papers/neuromorphic_landscape.html
15. AI Coding Tools Comparison: December 2025 Rankings - Digital Marketing Agency, accessed December 27, 2025, https://www.digitalapplied.com/blog/ai-coding-tools-comparison-december-2025
16. New SWE-Bench Pro becnchmark (GPT-5 & Claude 4.1 drop from 70%+ to ~23%) - Reddit, accessed December 27, 2025, https://www.reddit.com/r/singularity/comments/1nmzy5p/new_swebench_pro_becnchmark_gpt5_claude_41_drop/
17. (PDF) Research Status and Development Trends of Deep Reinforcement Learning in the Intelligent Transformation of Agricultural Machinery - ResearchGate, accessed December 27, 2025, https://www.researchgate.net/publication/392411466_Research_Status_and_Development_Trends_of_Deep_Reinforcement_Learning_in_the_Intelligent_Transformation_of_Agricultural_Machinery
18. The State of Reinforcement Learning in 2025 - DataRoot Labs, accessed December 27, 2025, https://datarootlabs.com/blog/state-of-reinforcement-learning-2025
19. (PDF) Wheat Production Transition Towards Digital Agriculture Technologies: A Review, accessed December 27, 2025, https://www.researchgate.net/publication/397718657_Wheat_Production_Transition_Towards_Digital_Agriculture_Technologies_A_Review
20. September 18, 2025 - American Sheep Industry Association, accessed December 27, 2025, https://www.sheepusa.org/newsletter/september-18-2025
21. 2025 GAP Report | Global Agricultural Productivity Initiative at Virginia Tech, accessed December 27, 2025, https://globalagriculturalproductivity.org/2025-gap-report/
22. 2025 GAP Report Launch | Global Agricultural Productivity Initiative at Virginia Tech, accessed December 27, 2025, https://globalagriculturalproductivity.org/2025-gap-report-launch-2/
23. Agronomists call for new approach to estimate crop yields and gaps, accessed December 27, 2025, https://www.cals.iastate.edu/news/2025/agronomists-call-new-approach-estimate-crop-yields-and-gaps
24. State of Consumer AI 2025: Product Hits, Misses, and What's Next ..., accessed December 27, 2025, https://a16z.com/state-of-consumer-ai-2025-product-hits-misses-and-whats-next/
25. The AI Application Spending Report: Where Startup Dollars Really Go, accessed December 27, 2025, https://a16z.com/the-ai-application-spending-report-where-startup-dollars-really-go/
26. SWE-EVO: Benchmarking Coding Agents in Long-Horizon Software Evolution Scenarios, accessed December 27, 2025, https://arxiv.org/html/2512.18470v2
27. AI Podcasts Leaderboard: Top Pods for AI Engineers and Founders - Arize AI, accessed December 27, 2025, https://arize.com/ai-podcasts/
28. Lex Fridman - CoRecursive Podcast, accessed December 27, 2025, https://corecursive.com/rankings/lex-fridman/
29. Transcript for DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters | Lex Fridman Podcast #459, accessed December 27, 2025, https://lexfridman.com/deepseek-dylan-patel-nathan-lambert-transcript/
30. Lex Fridman Podcast, accessed December 27, 2025, https://lexfridman.com/podcast/
31. Steven Data Talk | E13 | Repost of Build Up with Lydia – Navigating AI and Entrepreneurship from, accessed December 27, 2025, https://www.youtube.com/watch?v=OlbA9lFDuIQ
32. Steven 数据漫谈EP6 with guest Kyla, DukeCEO, Sketch3D.AI - Spotify for Creators, accessed December 27, 2025, https://creators.spotify.com/pod/profile/stevendatatalkcn/episodes/Steven--EP6-with-guest-Kyla--DukeCEO--Sketch3D-AI-e39cegd
33. AI Agents in 2025: Expectations vs. Reality - IBM, accessed December 27, 2025, https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality
34. Team Builds Computer Prototype Designed To Make AI More Efficient - UTD News Center, accessed December 27, 2025, https://news.utdallas.edu/science-technology/neuromorphic-computer-2025/

Learn by Doing with Steven 数能生智 | 社媒矩阵 Social Media Matrix

📅 预约交流 | Book a Call 平台 链接 🗓️ Cal.com https://cal.com/stevenwang 📅 Google Calendar https://calendar.app.google/fT6ip6i638AGuP8v5 📬 ChiliPiper https://gmail.chilipiper.com/me/steven-wang ⏰ Calendly https://calendly.com/steven_wang/60min

🌐 主站与文章 | Website & Writing 平台 链接 🧱 Github https://github.com/learnbydoingwithsteven 🌐 Github.io https://learnbydoingwithsteven.github.io/ 🐻 Bear Blog https://learnbydoingwithsteven.bearblog.dev/ ✍️ Substack https://substack.com/@steven923044 📰 LinkedIn Newsletter https://www.linkedin.com/newsletters/7283566848875384833/ 📲 微信公众号 https://mp.weixin.qq.com/s/_UgwPOKp0KFDNQdPSYuWMg

💬 社群矩阵 | Communities 社群 链接 🌐 LinkedIn Group(中/欧/美AI社群) https://www.linkedin.com/groups/15054015 💬 Discord(中欧美AI社群) https://discord.gg/XE6WpAfM 💬 Discord(Learn By Doing) https://discord.gg/47yq8KcC 📡 Telegram Group(技术·播客) https://t.me/+i9NRjGCKjRQxMDNk 📱 WhatsApp Group(技术·播客) https://chat.whatsapp.com/Gmfju4artZB0VfRxV93H8p

🧑‍💼 直接联系 | Direct Contacts 项目 Email Career / Banking / Finance / GenAI / DataScience / Industry / Consulting and Collaborations wjbear2020@gmail.com

💰 支持创作 | Support Steven’s Work 平台 链接 💳 Paypal https://www.paypal.com/paypalme/wangjiansuper?country.x=IT&locale.x=en_US ☕ Buy Me A Coffee https://buymeacoffee.com/learnbydoing

🎥 视频矩阵 | Video 平台 链接 ▶️ YouTube Learn By Doing With Steven https://www.youtube.com/@learnbydoingwithsteven ▶️ 数能生智(中文频道) https://www.youtube.com/@%E6%95%B0%E8%83%BD%E7%94%9F%E6%99%BA 🎵 TikTok https://www.tiktok.com/@learnbydoingwithsteven 📺 哔哩哔哩(Steven数据漫谈)

https://space.bilibili.com/3546784399886498?spm_id_from=333.788.upinfo.head.click 🎵 抖音 https://www.douyin.com/user/self?modal_id=7577406098097933622&showTab=post

🌍 网站 / 频道 / 平台 | Website 平台 链接 WhatsApp Channel https://whatsapp.com/channel/0029VazqfKFK0IBoyfgyO70b Telegram Channel https://t.me/learnbydoingwithsteven Github https://github.com/learnbydoingwithsteven Github.io https://learnbydoingwithsteven.github.io/ LinkedIn Newsletter(Business) https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7359504834644926464 LinkedIn Newsletter(Tech) https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7283566848875384833 Substack https://substack.com/@steven923044 Bear Blog https://learnbydoingwithsteven.bearblog.dev/ 微信公众号 世界模型的下一站:李飞飞团队World Labs如何定义“空间智能”新赛道?

📱 社交平台 | Social

平台 链接 小红书 https://www.xiaohongshu.com/user/profile/5e0e450b0000000001001e26?xsec_token=YBolHet_ed8Npv1I7yr4lMGb6VRZWtbkE9QSkodxdCu0I=&xsec_source=app_share&xhsshare=CopyLink&appuid=5e0e450b0000000001001e26&apptime=1737132065&share_id=c4262bd995c34cebaab2e0d85e5a3875 LinkedIn(独立项目) https://www.linkedin.com/in/steven-w-6828a31bb/ LinkedIn(完整职业) https://www.linkedin.com/in/jian-w-83bb36440/ X(Twitter) https://x.com/Catchingtides Facebook https://www.facebook.com/profile.php?id=61571798756202 Instagram learnbydoingwithsteven

🤝 合作伙伴 | Collaborations

合作方 链接 Vanta Tech Lab https://www.linkedin.com/company/vantatechlab DukeCEO https://www.linkedin.com/company/chinese-entrepreneurs-organization-at-duke-dukeceo 创·见 Founder Vision https://www.xiaoyuzhoufm.com/podcast/66a322470736bb4045362844?s=eyJ1IjoiNjVkZDlhNDBlZGNlNjcxMDRhOThhNjZkIiwiZCI6MX0= 我作为嘉宾的节目 https://www.xiaoyuzhoufm.com/episode/690c8ad5d99642be96c4accc

🚗 兴趣爱好 | Hobby

内容 链接 Steven On The Road (YouTube) https://youtube.com/@stevenontheroad6129?si=pAUvAm0af6eFJrDn Steven On The Road(哔哩哔哩) https://space.bilibili.com/157133040

Learn by Doing with Steven 数能生智 | 播客矩阵 Podcast Matrix

  1. Steven Data Talk

“Steven Data Talk” — delivering the clearest conversations on cutting-edge AI, technology, innovation, business, and entrepreneurship.

Now available on Spotify, Apple Podcasts, YouTube, Amazon Music, Xiaoyuzhou, Ximalaya, and more.

Support the creator & explore all links:

https://linktr.ee/learnbydoingwithsteven

平台 链接 Apple Podcasts https://podcasts.apple.com/gb/podcast/steven-data-talk/id1845702474 Spotify https://open.spotify.com/show/3qSV5WJBsHbivqdmIopEYR?si=Q7XxCzxsSTKXmKiW27hmAA 喜马拉雅 https://www.ximalaya.com/album/88884765 小宇宙 https://www.xiaoyuzhoufm.com/podcast/68ef81ce0a78e59c5c5c45e7 YouTube Music https://music.youtube.com/playlist?list=PLfV0OO4XXVBk1oCeZg-xwdnYbNuSDqgmW&si=EQtgt96FfSZSwxvc Amazon Music https://music.amazon.com/podcasts/b31ecf00-32e8-41b5-96cd-13e86253d249/steven-data-talk

  2. Steven 数据漫谈

《Steven数据漫谈》——用最清晰的方式,聊最前沿的人工智能、科技、创新、商业、创业思考。现已登陆 Spotify、Apple Podcasts、YouTube、Amazon Music、小宇宙、喜马拉雅 等平台。

所有链接,支持作者:https://linktr.ee/learnbydoingwithsteven

平台 链接 Apple Podcasts https://podcasts.apple.com/gb/podcast/steven%E6%95%B0%E6%8D%AE%E6%BC%AB%E8%B0%88/id1845703144 Spotify https://open.spotify.com/show/4b8dqmQmVQiPPxuIZNR58w?si=QgCsksYYSV-jTqz1e4tFpw 喜马拉雅 https://www.ximalaya.com/album/89574928 小宇宙 https://www.xiaoyuzhoufm.com/podcast/68ef81d14ce3619b345a32b2 YouTube Music https://music.youtube.com/playlist?list=PLfV0OO4XXVBmQOOLxMpZXn519_PW3uneG&si=rgCVgHICnqNK5rFX Amazon Music https://music.amazon.com/podcasts/6d68d8c4-d7bb-4c1d-8b6c-1e9b0946463d/steven%E6%95%B0%E6%8D%AE%E6%BC%AB%E8%B0%88

  3. Steven AI Talk(多语)

“Steven AI Talk” — delivering the clearest conversations on cutting-edge AI, technology, innovation, business, and entrepreneurship with AI summarizations on various high quality source contents.

🔗 Support the Creator & Access All Links

⁠https://linktr.ee/learnbydoingwithsteven⁠

语言 平台 链接 English Apple Podcasts https://podcasts.apple.com/gb/podcast/steven-ai-talk-english/id1846320778 English Spotify https://open.spotify.com/show/43CVIH13u3pvIyg9aTEHwY?si=w-gPlNheRmCTfvlIPOwL9w English Amazon Music https://music.amazon.com/podcasts/7aaf0f86-7cb1-4f6f-bba3-6fd3cb9dcad2/steven-ai-talkenglish English 小宇宙 https://www.xiaoyuzhoufm.com/podcast/68ef7ec2332567e348b6e57b English YouTube Music https://music.youtube.com/playlist?list=PLfV0OO4XXVBk811V6mTVbL483S_56ZtF5&si=qqGM8Es0NTSFtflX 中文 Spotify(Steven AI 播客) https://open.spotify.com/show/7gLoHfOKO302yNcF7bzNOu?si=FU7xcKwUQU-jfTXxi0grBg 中文 喜马拉雅 https://www.ximalaya.com/album/88276097 Italiano Spotify(Italiano) https://open.spotify.com/show/7D3BcWR5xGzGap8A1bSeoQ?si=ek8GNzrnT-OdiGJ91vi0rw

  4. YC 斯坦福创业课 2015 CS183B 精讲

平台 链接 Apple Podcasts https://podcasts.apple.com/us/podcast/yc%E6%96%AF%E5%9D%A6%E7%A6%8F%E5%88%9B%E4%B8%9A%E8%AF%BE2015cs183b%E7%B2%BE%E8%AE%B2/id1846320657 Spotify https://open.spotify.com/show/5dg2pUoVlwvWCu2RSRYUay?si=esrJ8TByS-Cqrw4aOqKDyw 喜马拉雅 https://m.ximalaya.com/album/109171033?from=pc 小宇宙 https://www.xiaoyuzhoufm.com/podcast/68ef7ec662e8bfe0dffdd116 YouTube Music https://music.youtube.com/playlist?list=PLfV0OO4XXVBlRgHAHArWBcpbOYykZfMqI&si=KwnQAp25yoprI39H Amazon Music https://music.amazon.com/podcasts/97f55ca2-d30d-48d8-afa5-9d51362bf92c/yc%E6%96%AF%E5%9D%A6%E7%A6%8F%E5%88%9B%E4%B8%9A%E8%AF%BE2015cs183b%E7%B2%BE%E8%AE%B2

通用免责声明 | Disclaimer https://mp.weixin.qq.com/s?__biz=MzI4NTI0NjM3Mg==&mid=2247491914&idx=1&sn=8ebb1c0cc9be1ed09d3b8b25eba15bd2&scene=21&poc_token=HMlDUGmjva5eDgrGA11VnqjyFV02r_7D3cFvSGVK