Shocking Results from Manim and Antigravity
Antigravity with Gemini 3 makes creating Manim math drawings and teaching videos almost effortless. Happy Valentine's Day!
All videos are available on my YouTube channel: learn by doing with steven.
I believe everyone has watched 3Blue1Brown's videos: explanations of the internal mechanisms of large language models, the Fourier transform, and so on. They are easy to understand and richly illustrated, and the library he often uses to make them is Manim.
As early as when the vibe-coding trend first began, I tried having models write Manim code. But the results were all simple, disorganized videos, never mind anything like drawing with the Fourier transform. So learning Manim, or taking shortcuts with it, was set aside.
But today, inspired by a video on TikTok, I used a vibe-coding approach with Manim to create a heart video (key point: pure prompts!), and then went on to make several other drawing and teaching videos.
The whole production process was just prompts, using Antigravity + Gemini 3 (high): hand it to the model, and it was done in one go. (I haven't tried Trae yet. I generally don't use Trae for complex applications; I always feel its configuration of things like tool calls sometimes lacks coherence, even though it can also use Gemini as the model. Its cost-performance ratio is good, though.)
Below are some examples of generated videos.
Draw the map of China.
Draw a dinosaur.
Draw a rose.
Draw a gift.
Draw a heart, and by the way, wish everyone a Happy Valentine's Day.
(Video links have been removed, since publishing this article kept failing over the past 48 hours; the videos are on YouTube at learn by doing with steven.)
I know you want to say that, except for the heart, the other videos are just so-so. But keep in mind that they are drawn using the Fourier transform (putting it simply here); the model still has to work out the mathematical formulas and how to use Manim. Below is the code for drawing the map of China.
from manim import *
import numpy as np

class FourierChina(Scene):
    def construct(self):
        # 1. Define China map shape path (key vertices)
        # Simplified coordinates approximating the "rooster" shape
        # Scale factor
        S = 0.8
        # Approximate path (counter-clockwise, starting from the northeast)
        key_points = [
            complex(3.0, 2.0),    # Heilongjiang (northeast tip)
            complex(3.2, 1.5),
            complex(2.8, 1.0),    # Jilin/Liaoning
            complex(2.5, 0.5),    # Bohai Sea coast
            complex(2.8, 0.2),    # Shandong Peninsula
            complex(2.5, 0.0),    # Jiangsu coast
            complex(2.6, -0.5),   # Shanghai/Zhejiang
            complex(2.2, -1.0),   # Fujian
            complex(2.0, -1.5),   # Guangdong
            complex(1.5, -1.8),   # Hainan (kept on the single loop for simplicity)
            complex(1.0, -1.5),   # Guangxi
            complex(0.0, -1.2),   # Yunnan
            complex(-0.5, -0.8),
            complex(-1.5, -0.5),  # Tibet south
            complex(-2.5, 0.0),   # Tibet west
            complex(-3.0, 1.0),   # Xinjiang west
            complex(-2.0, 2.5),   # Xinjiang north
            complex(-1.0, 2.2),   # Gansu / Inner Mongolia west
            complex(0.0, 2.0),    # Inner Mongolia north
            complex(1.0, 2.2),
            complex(2.0, 2.5),    # Inner Mongolia east
            complex(3.0, 2.0),    # Back to start
        ]

        # 2. Resample the polyline so the pen moves at uniform speed
        def get_sampled_points(points, num_samples):
            # Calculate total length
            total_dist = 0
            dists = []
            for i in range(len(points) - 1):
                d = abs(points[i + 1] - points[i])
                dists.append(d)
                total_dist += d
            sampled = []
            current_dist = 0
            # We want uniform steps
            step = total_dist / num_samples
            curr_idx = 0
            for i in range(num_samples):
                target_dist = i * step
                # Advance to the correct segment
                while current_dist + dists[curr_idx] < target_dist and curr_idx < len(dists) - 1:
                    current_dist += dists[curr_idx]
                    curr_idx += 1
                # Interpolate within the current segment
                segment_len = dists[curr_idx]
                if segment_len == 0:
                    local_t = 0
                else:
                    local_t = (target_dist - current_dist) / segment_len
                p1 = points[curr_idx]
                p2 = points[curr_idx + 1]
                sampled.append(p1 + (p2 - p1) * local_t)
            return sampled

        # 3. Generate Fourier coefficients, sorted by magnitude
        N = 200
        complex_points = get_sampled_points(key_points, N)
        dft = np.fft.fft(complex_points)
        indices = list(range(N))
        indices.sort(key=lambda k: abs(dft[k]), reverse=True)

        SCALE = 0.8  # Adjust scale

        # Prepare components (signed frequency, radius, phase)
        components = []
        for k in indices:
            freq = k
            if k > N / 2:
                freq = k - N
            amplitude = dft[k] / N
            components.append({
                'freq': freq,
                'radius': abs(amplitude) * SCALE,
                'phase': np.angle(amplitude),
            })

        # Manim scene setup
        time_tracker = ValueTracker(0)
        epicycles = VGroup()
        epicycle_mobjects = []
        path = VMobject()
        path.set_color(RED)  # China red

        # Filter out circles too small to see
        min_radius = 0.005 * SCALE
        for comp in components:
            radius = comp['radius']
            if radius < min_radius:
                continue
            circle = Circle(radius=radius, color=WHITE, stroke_opacity=0.3, stroke_width=1)
            vector = Line(start=ORIGIN, end=RIGHT * radius, color=WHITE, stroke_width=1, stroke_opacity=0.5)
            epicycles.add(circle, vector)
            epicycle_mobjects.append({'circle': circle, 'vector': vector, 'data': comp})

        pen = Dot(radius=0.05, color=YELLOW)
        self.add(epicycles, path, pen)

        def get_pos(t):
            # Tip of the epicycle chain at time t
            curr_pos = np.array([0.0, 0.0, 0.0])
            for obj in epicycle_mobjects:
                data = obj['data']
                angle = data['freq'] * t + data['phase']
                dx = data['radius'] * np.cos(angle)
                dy = data['radius'] * np.sin(angle)
                curr_pos = curr_pos + np.array([dx, dy, 0])
            return curr_pos

        # Initialize path start
        start_pos = get_pos(0)
        path.set_points_as_corners([start_pos, start_pos])

        def update_epicycles(mob):
            t = time_tracker.get_value()
            curr_pos = np.array([0.0, 0.0, 0.0])
            for obj in epicycle_mobjects:
                data = obj['data']
                angle = data['freq'] * t + data['phase']
                dx = data['radius'] * np.cos(angle)
                dy = data['radius'] * np.sin(angle)
                v = np.array([dx, dy, 0])
                obj['circle'].move_to(curr_pos)
                obj['vector'].put_start_and_end_on(curr_pos, curr_pos + v)
                curr_pos = curr_pos + v
            pen.move_to(curr_pos)
            path.add_points_as_corners([curr_pos])

        epicycles.add_updater(update_epicycles)
        self.play(time_tracker.animate.set_value(2 * np.pi), run_time=10, rate_func=linear)

In short, optimizing the above videos should not be difficult, but I haven't tried yet.
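If you're wondering why the spinning circles trace the outline at all, the scene above rests on a standard DFT identity: summing one rotating vector per Fourier coefficient (radius |X_k|/N, angular speed equal to the signed frequency, starting angle equal to the phase) reproduces the sampled points exactly at the sample times t = 2πn/N. Here is a minimal NumPy sketch of that identity, using a unit circle as a stand-in for any closed outline; the names (`epicycle_sum`, `recon`) are illustrative, not taken from the scene:

```python
import numpy as np

# Stand-in outline: N samples of a unit circle (any closed curve would do)
N = 64
ts = np.linspace(0, 2 * np.pi, N, endpoint=False)
points = np.cos(ts) + 1j * np.sin(ts)

dft = np.fft.fft(points)

def epicycle_sum(t):
    """Sum of rotating vectors, one per Fourier coefficient."""
    total = 0j
    for k in range(N):
        freq = k if k <= N // 2 else k - N     # signed frequency, as in the scene
        amp = dft[k] / N                        # complex coefficient
        total += abs(amp) * np.exp(1j * (freq * t + np.angle(amp)))
    return total

# At t = 2*pi*n/N the epicycle sum reproduces the n-th sample (inverse DFT)
recon = np.array([epicycle_sum(2 * np.pi * n / N) for n in range(N)])
```

`np.allclose(recon, points)` holds up to floating-point error, which is why the pen in the Manim scene lands back on the outline as `time_tracker` sweeps from 0 to 2π. To render the actual scene, Manim Community's CLI can be used, e.g. `manim -pql your_file.py FourierChina` (substitute whatever filename you saved the script under).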
For example, take the architecture overviews and screenshot walkthroughs of the two projects I am working on. Both videos are works in progress. The second improved somewhat after I emphasized some optimization points, but the screenshot material in both is still poor, because tests need to be rerun to capture better screenshots (MCP is not as easy to use as Antigravity's browser-use, and if everything crashes the screenshots can only be taken manually), so overall there has been no real progress.
(Check my YouTube channel to see these videos.)
I’ll stop here for now and will share more when there are new discoveries.
All my links: linktree-learn by doing with steven