Certainly looks something like that. It becomes more intelligible once you recognize the shell function definition pattern:
Code:
function_name () { commands ... }
The function name in question is ":". Inside its body it calls itself twice, connecting the stdout of the first instance to the stdin of the second. The pipeline is put in the background with &, so each call returns immediately instead of waiting for its children. The ; then terminates the function definition, and the final : invokes the function for the first time, setting everything off.
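Reformatted with whitespace, and with the function renamed from ":" to "bomb" (my own renaming, purely for readability), the one-liner reads:

```shell
# Equivalent to :(){ :|:&};: with the function renamed to "bomb".
bomb () {
    bomb | bomb &   # call itself twice, piped together, in the background
}
# bomb   # <- the trailing ":" in the original invokes it; DO NOT uncomment
```

Left as a definition only, this is harmless; it is the final invocation that detonates it.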
So, because every invocation spawns two more and returns without waiting for them to finish, the number of running instances grows exponentially, roughly doubling with each generation.
Exactly what happens when you get a lot of instances is worth spelling out: each command in a pipeline runs in its own subshell, so every invocation forks two new processes rather than running inside the original shell. The growth is therefore in processes, not in one shell's memory or file handles: the CPU gets pinned, and the machine runs out of process slots, hitting either the per-user cap (RLIMIT_NPROC, settable with ulimit -u) or the kernel's global process table. If the ulimits are set badly on your system (too many Linux distros don't set paranoid ulimits!), the whole machine locks up, because even spawning a shell to kill the bomb needs a free process slot.
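To see where your own system stands, ulimit -u is the shell interface to RLIMIT_NPROC:

```shell
# Show the maximum number of processes this user may have.
# "unlimited" or a very large number means a fork bomb can take the
# whole machine down before hitting any limit.
ulimit -u
# ulimit -u 4000   # a non-root user can lower it for this shell and its
#                  # children (but cannot raise it back afterwards)
```

The value 4000 above is just an illustrative figure; a sensible cap depends on your workload.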
Perhaps someone would care to try this on a disposable test machine or in a VM and report what it does, ideally after lowering ulimit -u first so the box stays recoverable.