Glowing white and blue 3D wireframe image of a person leaping through the air over a black backdrop. Credit: VentureBeat made with Midjourney

As the Hollywood actors’ strike marches toward its 100th day with no resolution in sight, a technological leap has just brought one of the actors’ biggest fears closer to reality: 3D scanning of human bodies in motion, potentially allowing actors’ performances and mannerisms to be captured and stored as a 3D model that studios could re-use in perpetuity.

Although 3D scanning technology has been used in Hollywood for decades, it has typically involved a complex and time-consuming setup: multiple cameras arranged 360 degrees around an actor’s body or, in the case of capturing motion, ping-pong-ball-like “markers” placed directly on the actor and a tight-fitting bodysuit. Even recent AI-based approaches, such as that of UK startup Move AI, generally rely on multiple cameras (though Move has a new single-camera app now in limited, invitation-only release).

But now a new method has emerged: Gaussian splatting, a rendering technique that in recent years has been used to capture static 3D imagery from a single 2D camera moved in a sequence around an object, has been modified by researchers at Huawei and the Huazhong University of Science and Technology in China to capture dynamic motion in 3D as well, including human body motions.

Their method is called “4D Gaussian splatting,” because time, the fourth dimension, is the new addition, allowing the rendered scene to change over time.

Why motion is so tricky for Gaussian splatting

3D Gaussian splatting builds on a splatting technique devised in 2001 by researchers at MIT, ETH Zurich, and Mitsubishi for rendering objects scanned with lasers.

It uses collections of particles to represent a 3D scene, each with its own position, rotation, and other attributes. Each point is also assigned an opacity and a color, which can change depending on the view direction. In recent years, Gaussian splatting has come a long way and can now be rendered in modern web browsers and made from a collection of 2D images on a user’s smartphone.
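
To make that representation concrete, the following is a minimal sketch in Python of the per-point attributes a splat-based scene typically stores, including a view-dependent color lookup. The class and field names are illustrative assumptions, not taken from the paper or from any particular splatting library, and the color evaluation is a simplified degree-1 spherical-harmonic approximation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianPoint:
    position: np.ndarray   # (3,) center of the Gaussian in world space
    rotation: np.ndarray   # (4,) unit quaternion orienting the ellipsoid
    scale: np.ndarray      # (3,) per-axis extent of the ellipsoid
    opacity: float         # how strongly this point occludes what lies behind it
    sh_coeffs: np.ndarray  # (4, 3) spherical-harmonic RGB coefficients: base color plus a view-dependent tilt

def color_toward(point: GaussianPoint, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a simplified degree-1 spherical-harmonic color for a view direction."""
    # Real implementations include SH normalization constants; omitted here for brevity.
    d = view_dir / np.linalg.norm(view_dir)
    base = point.sh_coeffs[0]          # view-independent RGB
    tilt = point.sh_coeffs[1:4].T @ d  # RGB contribution that varies with view direction
    return np.clip(base + tilt, 0.0, 1.0)
```

Rendering a frame then amounts to projecting each such point onto the image plane and blending the resulting “splats” by opacity, which is what makes the technique fast enough to run in a web browser.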

However, as the researchers write in a new paper published October 12 simultaneously on GitHub and the open-access site arXiv.org, “3D-GS [Gaussian splatting] still focuses on the static scenes. Extending it to dynamic scenes as a 4D representation is a reasonable, important but difficult topic. The key challenge lies in modeling complicated point motions from sparse input.”

The main challenge is that when multiple Gaussian splats are joined together across different timestamps to create a moving image, each point “deforms” from image to image, creating inaccurate representations of the shapes and volumes of the objects (and subjects) in the images.

However, the researchers were able to overcome this by maintaining only “one set of canonical 3D Gaussians” and using a predictive model to map where and how those points would move from one timestamp to the next.
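
To illustrate the canonical-plus-deformation idea at a high level, here is a rough PyTorch sketch: one shared set of Gaussian centers is kept, and a small time-conditioned network predicts how each center moves at a given timestamp. This is a simplification for illustration only, not the authors’ implementation (their deformation field also adjusts other Gaussian attributes and uses a more structured encoder), and every name below is hypothetical.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Tiny MLP that predicts how each canonical Gaussian center moves at time t."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Input: a canonical 3D position plus a scalar timestamp; output: a 3D offset.
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, canonical_xyz: torch.Tensor, t: float) -> torch.Tensor:
        time_col = torch.full((canonical_xyz.shape[0], 1), t)
        offsets = self.mlp(torch.cat([canonical_xyz, time_col], dim=1))
        return canonical_xyz + offsets  # deformed positions for this timestamp

# Usage: the same canonical point set is re-posed for every frame, rather than
# storing a separate set of Gaussians per timestamp.
canonical_xyz = torch.randn(10_000, 3)   # one shared set of Gaussian centers
field = DeformationField()
positions_at_half_second = field(canonical_xyz, t=0.5)
```

Keeping a single canonical set and deforming it per frame is what keeps the representation compact enough to render in real time, rather than multiplying the storage by the number of timestamps.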

What this looks like in practice is a 3D scene of a person cooking with a pan, chopping and stirring ingredients, while a dog moves nearby. Another example shows human hands breaking a cookie in half, and yet another shows hands opening a toy egg to reveal a nested toy chick inside. In all cases, the researchers were able to achieve a 3D rotational effect, allowing a viewer to move the “camera” around the objects in the scene and see them from multiple angles and vantage points.

Example of 4D Gaussian splatting. Credit: ‘4D Gaussian Splatting for Real-Time Dynamic Scene Rendering’

According to the researchers, their 4D Gaussian splatting method “achieves real-time rendering on dynamic scenes, up to 70 FPS at a resolution of 800×800 for synthetic datasets and 36 FPS at a resolution of 1352×1014 in real datasets, while maintaining comparable or superior performance than previous state-of-the-art (SOTA) methods.”

Next steps

While the initial results are impressive, the motion scenes the researchers captured in 3D take 20 minutes to produce and last only a few seconds each, far from the length needed to cover an entire feature film, for example.

But for studios looking to capture a handful of an actor’s motions and re-use them, it’s a great start. And for video game and XR/VR designers, it’s hard to imagine this technique not proving useful.

And, as with many promising technological advances, the quality and quantity of what can be captured, and over what time frame, are only likely to increase.

As the researchers write at the end of their paper, “this work is still in progress and we will explore higher rendering quality on complex real scenes in the subsequent development.”

Read the original article at VentureBeat: https://venturebeat.com/ai/actors-worst-fears-come-true-new-4d-gaussian-splatting-method-captures-human-motion/
