Jingrong Zhang | 张镜荣
Jingrong’s work explores the intersection of art, design, science, and cities, with a focus on social behavior in public spaces, racial and gender equity, and urban greenery and biodiversity. Her projects have been featured and supported by the Council for the Arts at MIT, the World Economic Forum, the Venice Biennale, and the Shanghai Library. Trained in urban planning, she holds a master’s degree in Applied Urban Science and Informatics from New York University’s Center for Urban Science and Progress.
Email: jingrong.zhang@nyu.edu
CV
Experience
Research Fellow
MIT Senseable City Lab
2023 - present
GIS and Mapping Specialist
Data Services, NYU Division of Libraries
2022 - 2023
Education
New York University
MS in Applied Urban Science and Informatics
2022
Tianjin University
BEng in Urban Planning
2020
Exhibition
Street Scores
Interactive Installation & Performance, MIT Open Space
2025
Eyes on the Street
19th International Architecture Exhibition, La Biennale di Venezia
2025
Re-Leaf
19th International Architecture Exhibition, La Biennale di Venezia
2025
Word as Image
Shanghai Library
2023
Talks
Visual Empathy in the Age of Data
Data | Art Symposium, Harvard University
2025
Visualizing Seshat: Unveiling Patterns in Human History with Seshat Databank
Complexity Science Hub
2024
The Electric Commute: Envisioning 100% Electrified Mobility in NYC
NYC Open Data Week
2023
Services
NYC Open Data Ambassador Trainee
[Tree-D Fusion]
About
The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
What about AI-generated trees? They probably wouldn’t make a sound, but they will nonetheless be critical for applications such as adapting urban flora to climate change.
Tree-D Fusion is a new system developed by MIT CSAIL, Google, and Purdue University that creates accurate, simulation-ready 3D models of real urban trees from just a single image, such as a Google Street View photo. By combining deep-learning–generated structural envelopes with genus-conditioned procedural growth models and Google’s Auto Arborist dataset, the system reconstructs not only visible geometry but also hidden branches, producing a continent-scale database of 600,000 lifelike tree models across North America.
These models allow researchers and cities to predict how individual trees will grow under different environmental conditions, assess future conflicts with infrastructure, and design climate-resilient, equitable urban forests. Tree-D Fusion reframes trees as dynamic, evolving systems — enabling continuous monitoring of urban canopies and offering a powerful tool for planning, environmental justice analysis, and next-generation ecological digital twins.
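As a rough illustration of how the pieces fit together, here is a minimal Python sketch of the single-image-to-model flow. Every name in it (estimate_envelope, grow_branches, TreeModel) is a hypothetical placeholder rather than Tree-D Fusion's actual code: the real system uses trained neural networks and genus-conditioned procedural growth models where these stubs return fixed geometry.

# Illustrative sketch of the pipeline; all names are hypothetical
# stand-ins, not Tree-D Fusion's actual API.

from dataclasses import dataclass

@dataclass
class TreeModel:
    genus: str
    envelope: list   # coarse 3D crown envelope (placeholder geometry)
    skeleton: list   # full branch structure, including hidden branches

def estimate_envelope(image_path: str) -> list:
    """Stand-in for the deep-learning step that infers the tree's
    visible 3D shape from a single street-level photo."""
    return [(0.0, 0.0, 0.0), (0.0, 0.0, 8.0)]  # placeholder points

def grow_branches(envelope: list, genus: str) -> list:
    """Stand-in for the genus-conditioned procedural growth model that
    fills the envelope with a plausible branching structure, including
    branches occluded in the source photo."""
    return [("trunk", envelope[0], envelope[-1])]  # placeholder skeleton

def reconstruct_tree(image_path: str, genus: str) -> TreeModel:
    """One photo plus a genus label (e.g. from the Auto Arborist
    dataset) yields a simulation-ready 3D tree model."""
    envelope = estimate_envelope(image_path)
    skeleton = grow_branches(envelope, genus=genus)
    return TreeModel(genus=genus, envelope=envelope, skeleton=skeleton)

if __name__ == "__main__":
    tree = reconstruct_tree("street_view_photo.jpg", genus="Quercus")
    print(f"Reconstructed a {tree.genus} with {len(tree.skeleton)} branch segment(s)")

Because the branch structure comes from a growth model rather than a static scan, the same machinery can in principle be run forward in time, which is what makes the growth-prediction and infrastructure-conflict analyses described above possible.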
Learn more at MIT News
Learn more at Venice Biennale
Contribution: modeling / rendering / visualization
Workflow