For my new podcast (it’s almost 3 months old already) we had the idea of streaming the recording live. I had also decided to leave Skype for Mumble: Skype was getting too unreliable, and Mumble supports separate channel recording for each connected client, which makes post-processing much simpler.
So I set up a spare box (Pentium(R) Dual-Core E6300, 4GB RAM) running Arch Linux (which turned out to be kind of a bad idea in the end). I am lucky to get a pseudo-static IP from my ISP, but I set up dynamic DNS using ddclient just in case.
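For reference, ddclient needs very little; a minimal sketch assuming a dyndns2-compatible provider (the server, hostname and credentials below are placeholders, not my actual setup):

```
# /etc/ddclient/ddclient.conf (minimal sketch, placeholder values)
# check the external IP every 5 minutes
daemon=300
# most dynamic DNS providers speak the dyndns2 protocol
protocol=dyndns2
# discover the public IP via a web service
use=web, web=checkip.dyndns.org
# your provider's update server and credentials
server=members.dyndns.org
login=your-username
password='your-password'
# the hostname to keep pointed at this box
podcast.example.com
```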
Next was running a Mumble server (Murmur). It’s pretty easy to set up and is packaged in most distro repos. I set up a single “room” on the server and made sure it was password protected. With that I was able to connect Mumble clients from inside and outside my network and get a proper podcast recording at high quality. Some port forwarding on my router made the server reachable from outside, and iptables rules were of course necessary to keep the naughty people away.
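The Murmur side only needs a handful of settings; a rough sketch of the relevant bits of murmur.ini (values are placeholders, everything else left at defaults):

```
# /etc/murmur.ini (sketch)
# Mumble's default port, used for both TCP and UDP
port=64738
# keep random people out of the room
serverpassword=changeme
# allow higher-quality audio from clients (bits per second)
bandwidth=130000
# we only need a handful of slots
users=10
welcometext="Podcast recording server"
```

And the matching iptables rules to let Mumble traffic in on its default port:

```
iptables -A INPUT -p tcp --dport 64738 -j ACCEPT
iptables -A INPUT -p udp --dport 64738 -j ACCEPT
```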
Icecast was the obvious option for the streaming bit. It’s pretty easy to set up (again, look in your repos) and has a ton of options to control audio quality and other server settings. Port forwarding was needed for the Icecast server as well.
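The interesting parts of icecast.xml boil down to passwords, a hostname and a listen port; a trimmed sketch with placeholder values (the real file ships with many more knobs for limits, paths and logging):

```
<!-- /etc/icecast.xml (trimmed sketch, placeholder passwords) -->
<icecast>
    <hostname>podcast.example.com</hostname>
    <limits>
        <clients>100</clients>
        <!-- ices0, ices2 and DarkIce all connect as sources -->
        <sources>4</sources>
    </limits>
    <authentication>
        <source-password>hackme</source-password>
        <admin-user>admin</admin-user>
        <admin-password>hackme</admin-password>
    </authentication>
    <listen-socket>
        <port>8000</port>
    </listen-socket>
</icecast>
```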
The only interesting issue here was streaming music. We wanted to stream Creative Commons music before and after the live recording, so I set up an ices0 source client to stream music from ccMixter. But since Firefox on OS X does not support MP3 playback (that’s supposed to change soon), I also had to run an ices2 instance streaming Ogg music for everyone on Firefox.
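I’m not reproducing my exact ices configs here, but an ices2 playlist setup looks roughly like this, going by the sample config ices2 ships with (hostname, mount point, password and playlist path are placeholders):

```
<!-- ices2 playlist config (rough sketch, placeholder values) -->
<ices>
    <background>1</background>
    <stream>
        <metadata>
            <name>Pre/post show music</name>
        </metadata>
        <input>
            <module>playlist</module>
            <param name="type">basic</param>
            <param name="file">/srv/radio/playlist.txt</param>
            <param name="random">1</param>
        </input>
        <instance>
            <hostname>localhost</hostname>
            <port>8000</port>
            <password>hackme</password>
            <mount>/radio.ogg</mount>
        </instance>
    </stream>
</ices>
```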
So next, streaming the actual Mumble conversation out. How do you get the audio out? Neither Mumble nor Murmur has an elegant way to pipe out the audio. But I found a way to use PulseAudio to essentially “pipe” audio from a Mumble client to an Icecast source client called DarkIce.
There are a couple of hoops one has to jump through to get this to work. Firstly, we need a Mumble client running on the same server as the Icecast server. Running a Mumble client requires a GUI, so I had to install GNOME on my headless Arch install (sigh!). Using the GUI, I set up the Mumble client to automatically connect to the Mumble server. After that I could just launch the Mumble client from the command line.
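Mumble accepts a mumble:// URL on the command line, so once the client is configured, connecting is a one-liner; the user, password and host below are placeholders:

```
# connect the local Mumble client straight to the Murmur server
# (DISPLAY must point at a running X session, since Mumble is a GUI app)
DISPLAY=:0 mumble "mumble://streambot:changeme@localhost:64738/" &
```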
Next was getting DarkIce to work. I got a lot of help from this post on doing something similar. DarkIce can pipe any PulseAudio stream to an Icecast server. So I created a dummy (null) PulseAudio sink, which is set as the audio output in the Mumble client (System: PulseAudio, Device: Null Output), and on the other end feeds DarkIce through the “.monitor” source that PulseAudio creates for every sink. The DarkIce config in the repo has the relevant settings.
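The PulseAudio side is just one module load; something like this creates the null sink, and every sink automatically gets a matching “&lt;sink&gt;.monitor” source, which is what DarkIce ends up reading from. The sink name is arbitrary, and making the monitor the default source is just one way of wiring it up:

```
# create a dummy sink for the Mumble client to play into
pactl load-module module-null-sink sink_name=mumble_sink

# PulseAudio automatically exposes the sink's monitor source:
#   mumble_sink.monitor
# Making it the default source means DarkIce picks it up without
# any per-application routing:
pactl set-default-source mumble_sink.monitor
```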
With that set up in DarkIce and in the Mumble client, the audio from the Mumble conversation was happily being streamed out over Icecast. DarkIce also supports transcoding and streaming multiple streams (MP3 and Ogg) to the Icecast server (yay!).
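The actual darkice.cfg is in the repo, but the shape of it is roughly this: one [input] section reading from PulseAudio (assuming a DarkIce build with PulseAudio support; otherwise the monitor source can be fed in via ALSA’s pulse plugin), and one [icecast2-N] section per outgoing stream, which is how the MP3 and Ogg streams coexist. Passwords and mount points are placeholders:

```
# darkice.cfg (sketch, placeholder values)

[general]
duration        = 0            # 0 means stream forever
bufferSecs      = 5
reconnect       = yes

[input]
device          = pulseaudio   # assumes a DarkIce build with PulseAudio support
sampleRate      = 44100
bitsPerSample   = 16
channel         = 2

# MP3 stream
[icecast2-0]
format          = mp3
bitrateMode     = cbr
bitrate         = 128
server          = localhost
port            = 8000
password        = hackme
mountPoint      = live.mp3
name            = Podcast live

# Ogg Vorbis stream for the Firefox-on-OS X folks
[icecast2-1]
format          = vorbis
bitrateMode     = cbr
bitrate         = 128
server          = localhost
port            = 8000
password        = hackme
mountPoint      = live.ogg
name            = Podcast live (ogg)
```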
I was cheap and lazy and ran all the servers on a single machine, but you can of course split them across different boxes/VMs. This was the final setup.
With everything set up, I could enable/disable the radio and Mumble streaming from the command line. I had to do one final hack to let Arch run GNOME even when no monitor was attached. I could have used a VNC virtual display, but I went for a more hardware-flavoured hack and made a “fake display” by wiring a few resistors onto a VGA connector. This way I could still plug in a real monitor anytime I wanted.
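The actual scripts are in the repo below, but conceptually the “enable/disable from the command line” bit is nothing more than starting and stopping the source clients. A hypothetical sketch (binary names and config paths here are illustrative only; they vary by packaging):

```
# start the music streams (MP3 via ices0, Ogg via ices2)
ices -c /etc/ices/ices-mp3.conf
ices2 /etc/ices/ices-ogg.xml

# start the live Mumble relay: Mumble client + DarkIce
DISPLAY=:0 mumble "mumble://streambot:changeme@localhost:64738/" &
darkice -c /etc/darkice.cfg &

# ...and stopping is just killing the source clients
killall ices ices2 darkice mumble
```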
I have uploaded all my config scripts to a GitHub repo here. Feel free to ask any questions.
And you can listen to the radio stream (when it’s running) from the server here.