This week, I’ve been diving into three fascinating technologies: Gaussian Splatting, Photogrammetry, and NeRFs (Neural Radiance Fields). Each of these methods offers unique ways to translate real-world elements into digital landscapes, perfect for applications in video games, movies, and other visual media.
Starting off with photogrammetry using just my phone, I snapped pictures of my kitchen and transformed them into a rather rough model, which I then brought into Unity.
From that model, I crafted a simple game—throwing in a couple of wizards and creating a shooting scenario. It wasn’t top-notch, but it served as a valuable test of the process and speed of creation. Surprisingly, it took only about two hours to build everything from scratch.
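Under the hood, photogrammetry boils down to triangulation: given the same feature spotted in two photos taken from known camera poses, you can solve for its 3D position. Here’s a minimal sketch of that core step with toy numbers (this is an illustration of the principle, not the actual pipeline the phone app runs):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel coordinates (u, v) of the same point in each image.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints A @ X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Demo: two toy cameras one unit apart, both observing a known point.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])     # made-up intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])]) # camera shifted on x
X_true = np.array([0.2, -0.1, 4.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With clean, noise-free observations like these, `X_hat` recovers `X_true` exactly; real apps do this for thousands of matched features at once, plus noise handling and bundle adjustment.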
Next, I experimented with Gaussian Splatting, using clips from my drone footage to generate scenes. For instance, I took a drone shot of a tower in Portland and applied Gaussian Splatting to render it in 3D space.
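What makes Gaussian Splatting fast is that each scene element is a 3D Gaussian blob that gets “splatted” onto the screen as a 2D Gaussian. The key operation is projecting the 3D covariance through a local linearization of the camera projection. Here’s a minimal sketch of that math with made-up camera values (an illustration of the idea, not the tooling I actually used on the drone footage):

```python
import numpy as np

def project_gaussian(mean_w, cov_w, R, t, f):
    """Project a 3D Gaussian (world-space mean and covariance) to a
    2D image-plane Gaussian, as in EWA/Gaussian splatting.

    R, t: world-to-camera rotation and translation; f: focal length.
    Returns the 2D mean and the 2x2 screen-space covariance.
    """
    m = R @ mean_w + t                          # mean in camera space
    x, y, z = m
    mean_2d = np.array([f * x / z, f * y / z])  # pinhole projection of the mean
    # Jacobian of the projection at the mean (local affine approximation).
    J = np.array([[f / z, 0.0, -f * x / z**2],
                  [0.0, f / z, -f * y / z**2]])
    cov_2d = J @ R @ cov_w @ R.T @ J.T          # covariance pushed through J
    return mean_2d, cov_2d

# Demo: a small isotropic Gaussian 5 units straight in front of the camera.
mean2d, cov2d = project_gaussian(
    mean_w=np.array([0.0, 0.0, 5.0]),
    cov_w=0.01 * np.eye(3),
    R=np.eye(3),
    t=np.zeros(3),
    f=800.0,
)
```

A renderer then sorts these 2D Gaussians by depth and alpha-blends them, which is why the results move so smoothly as you fly the camera around.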
These technologies have me buzzing with excitement! I’m currently exploring how to integrate Gaussian Splatting into my 3D and gaming projects, even contemplating creating scenes in videos using this technology. The ability to manipulate and generate various shots from a single scene is particularly intriguing.
While photogrammetry isn’t new, its simplicity remains captivating. Being able to capture an environment with just a phone or a LiDAR scanner and turn it into a 3D landscape within 10 to 15 minutes is mind-blowing. Adding gaming elements becomes almost instantaneous.
This week’s exploration has been a learning spree for me. One of my standout experiments was the tower scene; how realistic it looked, and how naturally it moved as I navigated around it, left a strong impression.
There are endless possibilities in exploring these technologies, and I’ve only scratched the surface. It’s an exciting journey ahead to see how these tools can further enhance my projects.
-Jesse (In collaboration with ChatGPT)