Back in 2012, we launched the first iteration of the Sherman media wall. As documented here, the hardware we used to push content to the seven screens was an eclectic mix of recycled PC/Mac laptops, Raspberry Pis, Android boxes, and so forth. While that mix served various purposes, e.g. demonstrating that computers come in many sizes, packages, and price points, and that many computers will run for years beyond the point we expect them to die, it also led to support problems, since no two machines were alike. Most were also not networked, so changing content required an acrobatic foray into the claustrophobic space behind the wall. Net effect: we didn't change the content very often, and we spent too much time crawling around in that less-than-friendly space to fix cranky machines.
The new design started with the notion that we wanted to standardize the technical platform and get every machine on the network. As the project developed, other design criteria emerged, some by choice, others of necessity, such as using Raspberry Pi 2 boards to power all of the screens. Their low cost, low power usage, simplicity, and durability make them a solid choice for running one application day in, day out. Another choice was to use private IP addressing, since both McMaster and the Library are running a bit short of public IP addresses. Because these machines only do one thing and aren't intended for public access, private addressing of the sort a home router uses makes sense.
It turns out that some of the things we took for granted didn't pan out. One of Matt's first discoveries was that browsers running on Raspbian (the Linux distribution commonly used on Raspberry Pi models) are terrible at rendering streaming video: playback is poor, if it works at all, largely because the browsers lack hardware-accelerated video decoding. Solution: loop video files outside the browser with omxplayer, a hardware-accelerated video player designed for the Pi. That worked well, but we then discovered that looping would fail for larger video files. It turned out to be a GPU memory issue: by dedicating more of the Pi's onboard memory to the GPU (adding the line 'gpu_mem=128' to /boot/config.txt), even massive video files loop reliably for days or weeks on end.
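Putting both fixes together, the setup on each Pi amounts to a couple of lines; this is a sketch, and the video path here is a hypothetical example, not our actual filename:

```shell
# Give the GPU 128 MB of the Pi's onboard RAM (takes effect after a reboot)
echo 'gpu_mem=128' | sudo tee -a /boot/config.txt

# Play the file outside the browser with omxplayer:
#   -b      blanks the background behind the video
#   --loop  repeats a seekable file indefinitely
omxplayer -b --loop /home/pi/videos/wall-loop.mp4
```

With gpu_mem raised, the same --loop invocation that stalled on large files keeps running unattended.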
Using private addressing also proved to be a bit of a challenge in an enterprise environment, where you can't just drop in a home router and hang a bunch of machines off it. In the end, we opted for a software rather than a hardware solution: Matt repurposed a retired desktop into an OpenBSD-powered router, behind which sits a simple switch to which we attach the Raspberry Pis.
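For the curious, an OpenBSD router along these lines needs surprisingly little configuration. The sketch below is illustrative only: the interface names (em0/em1) and the 192.168.10.0/24 subnet are assumptions, not the values we actually used.

```
# /etc/sysctl.conf -- allow the box to forward IPv4 packets
net.inet.ip.forwarding=1

# /etc/hostname.em1 -- internal interface facing the Pi switch
inet 192.168.10.1 255.255.255.0

# /etc/pf.conf -- NAT the Pis out through the external interface
ext_if = "em0"
int_if = "em1"
match out on $ext_if from $int_if:network to any nat-to ($ext_if)
pass   # permissive for brevity; a real deployment would add filter rules
```

The Pis then simply point at 192.168.10.1 as their default gateway.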
A parts list for a given screen would look like this:
- HDMI-capable monitor or TV
- short HDMI cable
- Raspberry Pi 2 or newer
- power supply for Pi
- micro SD card for Pi
- network cable
- optional: case for Pi (we used this fabulous design from Thingiverse)
As you can see, other than the monitor/TV, this is not an expensive parts list: perhaps $60-80 CAD per screen, depending on the source. That's trivial compared to even an inexpensive computer such as a Chromebox.