Armin Tajik

Game Developer


This is my attempt to prototype a mobile game idea that uses magnetic forces and fluid simulation inspired by ferrofluids.

I chose Unreal Engine 5 for the task. After setting up the project repo using Perforce and preparing the build pipeline for iOS, I was ready to make the prototype.

I started with the magnetic field. The idea here is that when you click on an asteroid, it becomes magnetic and attracts the particles. It’s not the best gameplay mechanic, I know, but it was a good enough start for testing the magnetic forces.

Prototyping ferromagnetic forces

Upon my initial investigation, I realized that although UE5 has a powerful fluid simulation system, it doesn’t work on macOS or iOS. Honestly, I didn’t expect it to, as I’m already surprised that UE5 runs smoothly on M-series MacBooks!

I stumbled upon this brilliant post while searching for another solution. The idea is to keep the number of particles limited (64, for example) and use a rendering technique called ray marching to smooth these particles together into a fluid-like substance. These smoothed spheres are also called metaballs.

This is how different levels of smoothness affect metaballs
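Under the hood, that smoothness comes from blending per-sphere distance fields with a smooth minimum. Here’s a minimal HLSL sketch of the idea (the polynomial SmoothMin form is a common choice; the names and the hard-coded 64 are just for illustration, not my exact shader):

// Signed distance from point p to a sphere at center c with radius r
float SphereSDF(float3 p, float3 c, float r)
{
	return length(p - c) - r;
}

// Polynomial smooth minimum; k controls how strongly nearby spheres merge
float SmoothMin(float a, float b, float k)
{
	float h = saturate(0.5 + 0.5 * (b - a) / k);
	return lerp(b, a, h) - k * h * (1.0 - h);
}

// Distance to the whole metaball surface: blend all 64 spheres together
float SceneSDF(float3 p, float3 Centers[64], float Radius, float k)
{
	float d = 100000.0;
	for (int i = 0; i < 64; i++)
	{
		d = SmoothMin(d, SphereSDF(p, Centers[i], Radius), k);
	}
	return d;
}

Larger values of k correspond to the “smoother” metaballs in the image above.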

As far as I know, Unreal Engine doesn’t support metaballs or even ray marching by default. In the post mentioned above, the author uses a plugin, but it is outdated now. I tried finding other add-ons but didn’t find anything good. Therefore, I decided to create it from scratch. This way, I had more control over the results and the opportunity to learn ray marching along the way.
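The ray marching loop itself is surprisingly compact. As a rough sketch of the core idea, assuming a SceneSDF(p) helper like the one above (the step count and thresholds are illustrative):

// Minimal sphere tracing: step along the ray by the scene distance
// until we hit the surface or give up. Constants are illustrative.
float RayMarch(float3 RayOrigin, float3 RayDir)
{
	float t = 0.0;
	for (int i = 0; i < 64; i++)
	{
		float3 p = RayOrigin + t * RayDir;
		float d = SceneSDF(p);      // distance to the metaball surface
		if (d < 0.01)
			return t;               // hit: return the travel distance
		t += d;                     // the SDF guarantees this step is safe
		if (t > 10000.0)
			break;                  // ray left the scene
	}
	return -1.0;                    // miss
}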

However, this decision meant that I had to write some HLSL in UE5. I’ve had prior experience writing HLSL in Unity, and while Unity supports HLSL really well, it’s not the same in Unreal Engine: you can write custom HLSL code using a Custom node in the material editor, but it’s really user-unfriendly (or unuser-friendly?); you can’t even reference a file and have to write your code in a plain text field. I was searching for resources on that and found this epic (pun intended) live training session.

This was a great beginning, as it helped me create a metaball shader. However, I wanted to share the locations of the spheres at runtime without having to create a hundred nodes (to be exact, 128 for 64 spheres) in the material editor and then reference them in C++. The real nightmare was changing the number of spheres. Imagine redoing all of that again.

That’s why I used render targets to feed the spheres’ locations to the custom node, which loops over the pixels and sets each sphere’s location and radius based on the pixel’s RGBA value.

Transferring run-time data to the custom shader using render targets.

Here’s a method for doing that. I’m sharing it here for anyone who wants to use it, since UE’s documentation is not the best. The numbers are all hard-coded, as this was a feasibility test.

#include "CanvasItem.h"
#include "Engine/Canvas.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

// Packs 64 sphere locations into an 8x8 render target: one pixel per
// sphere, RGB = world location, A = radius. The render target must use
// a float pixel format (e.g. RTF_RGBA32f) so the values aren't clamped.
void URenderTargetHandlerComponent::StoreSphereLocations(UTextureRenderTarget2D* RenderTarget,
                                                         const TArray<FVector>& SphereLocations) const
{
	if (!RenderTarget || SphereLocations.Num() != 64)
	{
		UE_LOG(LogTemp, Warning, TEXT("Invalid render target or sphere location count."));
		return;
	}

	UCanvas* Canvas = nullptr;
	FVector2D Size;
	FDrawToRenderTargetContext Context;

	UKismetRenderingLibrary::BeginDrawCanvasToRenderTarget(
		GetWorld(), RenderTarget, Canvas, Size, Context);

	if (Canvas)
	{
		for (int32 i = 0; i < 64; ++i)
		{
			// Map sphere index to a pixel in the 8x8 grid
			const int32 X = i % 8;
			const int32 Y = i / 8;

			const FVector Location = SphereLocations[i];

			// Draw a single opaque pixel carrying the sphere's data
			FCanvasTileItem TileItem(FVector2D(X, Y), FVector2D(1, 1),
			                         FLinearColor(Location.X, Location.Y, Location.Z, SphereRadius));
			TileItem.BlendMode = SE_BLEND_Opaque;
			Canvas->DrawItem(TileItem);
		}
	}

	// Always close the draw context once it has been opened
	UKismetRenderingLibrary::EndDrawCanvasToRenderTarget(GetWorld(), Context);
}
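On the shader side, the Custom node can then walk those 64 pixels and unpack each sphere. A sketch, assuming the render target is wired in as a Texture Object input named Positions (Unreal exposes a matching PositionsSampler inside the node), reusing the SphereSDF/SmoothMin helpers from earlier, with p and k coming from the surrounding shader code:

// Rebuild the metaball distance field from the 8x8 render target:
// one texel per sphere, RGB = location, A = radius.
float d = 100000.0;
for (int i = 0; i < 64; i++)
{
	// Sample the center of the texel that stores sphere i
	float2 UV = (float2(i % 8, i / 8) + 0.5) / 8.0;
	float4 Data = Positions.SampleLevel(PositionsSampler, UV, 0);
	d = SmoothMin(d, SphereSDF(p, Data.rgb, Data.a), k);
}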

At this point, all the particles were Actors; I used the UStaticMeshComponent class for the particles and its AddImpulse() method to move them around. This approach was good enough to test things out, but not efficient, especially considering that I didn’t need most of UStaticMeshComponent’s features. I thought of using USphereComponent instead, but even then, I would be using a single CPU thread to process something suited for the GPU.
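For reference, the per-actor version looked roughly like this (a sketch rather than the project’s actual code, with illustrative names like MagnetLocation, PullStrength, and MeshComponent):

// CPU approach: each particle is an Actor whose static mesh simulates
// physics and gets nudged toward the active magnet every frame.
void AFluidParticle::Tick(float DeltaTime)
{
	Super::Tick(DeltaTime);

	const FVector ToMagnet = MagnetLocation - GetActorLocation();
	const double DistSq = FMath::Max(ToMagnet.SizeSquared(), 1.0);

	// Inverse-square pull, scaled by frame time; bVelChange ignores mass
	const FVector Impulse = ToMagnet.GetSafeNormal() * (PullStrength / DistSq) * DeltaTime;
	MeshComponent->AddImpulse(Impulse, NAME_None, /*bVelChange=*/true);
}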

That’s why I considered the Niagara System (not the Niagara Fluid System). It made sense because I wanted to use the GPU while still having physics and collisions, and the Niagara System offers all of that. I just had to connect user input to the particle system and the particle system to the ray marching shader.

Using the Niagara System

At first, I wanted to calculate the attraction force for each particle individually, but it was too slow. While I had a lot of “fun” trying to create a Niagara Data Interface, I ended up not using it and just sent a single vector, the mean magnetic center, to the system.
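Feeding that vector to Niagara is then a one-liner on the UNiagaraComponent; a sketch, assuming a User parameter called MagneticCenter exists on the system (an illustrative name):

// Average the active magnets and push the result to Niagara each tick.
// Requires #include "NiagaraComponent.h"; "MagneticCenter" must be
// declared as a User parameter on the Niagara system.
FVector Center = FVector::ZeroVector;
for (const FVector& Magnet : ActiveMagnets)
{
	Center += Magnet;
}
if (ActiveMagnets.Num() > 0)
{
	Center /= ActiveMagnets.Num();
}
NiagaraComponent->SetVariableVec3(TEXT("MagneticCenter"), Center);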

Here, I packaged the prototype for iOS and tried it on my phone. It was really slow! And I wasn’t surprised, because ray marching is an expensive technique. I tried lowering the sample count to reduce the lag, but it caused weird visual artifacts. To solve that, I added some noise to the sampling distance, which fixed those artifacts but added noise to the final result. To embrace (!) the noise, I added a post-processing effect that applies some noise to everything.

Lowering the sample count caused visual artifacts
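In essence, the fix offsets each ray’s sampling distance by a per-pixel random value, so the banding from undersampling turns into grain. A minimal HLSL sketch of that idea, with illustrative names (ScreenUV, ViewSize, StepSize) and a commonly used screen-space hash:

// Cheap per-pixel hash (the widely used "hash without sine" pattern)
float Hash12(float2 p)
{
	float3 p3 = frac(float3(p.xyx) * 0.1031);
	p3 += dot(p3, p3.yzx + 33.33);
	return frac((p3.x + p3.y) * p3.z);
}

// Start each ray a random fraction of one step in, so the banding
// from a low sample count dissolves into noise instead.
float t = Hash12(ScreenUV * ViewSize) * StepSize;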

I tweaked the parameters some more and removed the “asteroids” to get a better visual result, and here’s what I got. This version runs smoothly on an iPhone 15, but it’s still far from a production-ready state.

While it was a great experiment, I parked the idea here to work on more exciting stuff!
