YetiBorg v2 review

Recently at The MagPi we took a look at the DiddyBorg v2 robot kit from PiBorg. That relatively large kit from the PiBorg team is a fantastic, robust robot, ideal for veteran Raspberry Pi robot builders wanting something a bit more advanced to play with. We don’t think it’s the best kit to choose for beginners, though, which is where the YetiBorg v2 comes in.

This article first appeared in The MagPi 71 and was written by Rob Zwetsloot.

In comparison to the DiddyBorg it’s pretty small, although it’s definitely not the smallest Pi robot kit out there. Unlike other beginner-friendly robot kits, it includes all the high-quality parts and chassis you’d expect from a PiBorg kit. This quality comes at a price, though, and at £160 it’s quite a bit more than your classic robot starter kit.

Construction is pretty simple, with a fantastic step-by-step guide that takes you through the entire build process. There’s no soldering involved, as it comes complete with pre-soldered motors and a Raspberry Pi Zero with a pre-soldered GPIO header. We received our YetiBorg fully constructed, in fact, but we estimate you’d be able to build it in under an hour, and the software won’t take you long to sort out either.

This kit comes with a ZeroBorg, a quad motor controller designed with the Pi Zero in mind. It may be smaller than the ThunderBorg controller used in the DiddyBorg, but the ThunderBorg can only control two motors (or two sets of motors) at a time, which means the YetiBorg is truly a four-wheel-drive robot. Like the ThunderBorg, you can stack ZeroBorgs to add more motor channels if you wish, and while the board is designed around the Pi Zero form factor, there’s no reason you can’t use it with a full-sized Raspberry Pi.

High performance

The YetiBorg comes with example scripts to get you started, including a remote control script using a game controller, a web interface that lets you see through a mounted Pi camera (not included), automated scripts, and more. You can use these to learn how the robot works and then cobble together your own scripts so the robot will do as you wish. While the ZeroBorg code is still quite complex, much like the ThunderBorg code, it’s a bit easier to understand overall. It’s no GPIO Zero but it’s still readable,…
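
For a flavour of what those example scripts build on, here is a minimal sketch of driving all four motors with PiBorg’s ZeroBorg Python library. The motor-to-wheel mapping and the half-power values are our own assumptions for illustration, so check the supplied examples for your particular build.

import time
import ZeroBorg  # PiBorg's ZeroBorg library, installed alongside the example scripts

ZB = ZeroBorg.ZeroBorg()   # find the board on the default I2C address
ZB.Init()
ZB.ResetEpo()              # clear the emergency stop flag so the motors can run

# Drive all four motors forward at half power for two seconds...
for motor in (ZB.SetMotor1, ZB.SetMotor2, ZB.SetMotor3, ZB.SetMotor4):
    motor(0.5)
time.sleep(2)

# ...then spin on the spot by reversing one side (which motors sit on which
# side depends on how you wired your YetiBorg, so this mapping is assumed).
ZB.SetMotor1(0.5)
ZB.SetMotor2(0.5)
ZB.SetMotor3(-0.5)
ZB.SetMotor4(-0.5)
time.sleep(1)

ZB.MotorsOff()             # always stop the motors when done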

MYHouse: Smart IoT doll’s house

When a master’s degree course at the University of Washington required the use of sensors and machine learning in the same project, two students – Maks Surguy and Yi Fan Yin – conceived the idea of an interactive doll’s house. Inside this cool crib, various features – including lighting and shutters – can all be turned on and off by the simple wave of a ‘wand’ (a PlayStation Move controller), with the help of some clever coding and a Raspberry Pi 3.

“I thought a smart doll’s house would be a great tool to demonstrate technical innovations to people in an approachable way,” says Maks, who worked with Yi Fan over a ten-week period, designing and constructing the clever little doll’s domicile. After consulting Maks’s architect wife about the physical structure, the pair drew the plans in 3D modelling software, then fitted together cardboard pieces for a prototype. Once happy with the design, they laser-cut the pieces out of plywood, joined them with snap-fit connections, then painted them in different colours.

According to Maks, building a doll’s house is akin to building a real house. “Lots of decisions needed to be made about dimensions, colours, structure, function, and interactions between all elements of the dollhouse. We ended up simplifying a lot of the elements through iterative process after realising that what we envisioned is actually a lot harder than it seems. Thankfully we had 24/7 access to a makerspace here in school and were able to reach decisions through prototyping every aspect of the construction.”

MYHouse: gesture recognition and response

A key characteristic of this smart doll’s house is its ability to recognise gestures and respond accordingly. A great deal of research went into gesture recognition: “trial and error went into choosing what gestures perform best across individuals while remaining intuitive to most people,” says Maks. “We read a lot of research papers on gesture recognition and then came up with our own gestures that worked with over 90 percent accuracy.” In total, seven gestures – pre-trained using machine learning – are stored in the system; the Raspberry Pi reads the information from the PlayStation Move and then determines whether the gesture is similar to one of the stored ones. As Maks explains, if the gesture is recognised, “various functional items in the dollhouse can be activated or deactivated using these pre-trained gestures:…
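
The team’s own code isn’t published in the article, but a template-matching classifier along these lines gives the general idea: each of the seven gestures is stored as a reference trace of PlayStation Move motion readings, and a new wave of the wand is accepted only if it lands close enough to one of them. The resampling length, distance metric, and threshold below are illustrative assumptions, not the project’s actual values.

import numpy as np

def resample(trace, length=64):
    """Resample an (N, 3) motion trace to a fixed number of samples."""
    trace = np.asarray(trace, dtype=float)
    old = np.linspace(0, 1, len(trace))
    new = np.linspace(0, 1, length)
    return np.stack([np.interp(new, old, trace[:, i]) for i in range(trace.shape[1])], axis=1)

def classify(trace, templates, threshold=2.5):
    """Return the name of the closest stored gesture, or None if nothing is close enough."""
    sample = resample(trace)
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(sample - resample(template)) / sample.size
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None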

Wildlife camera with object recognition

Ever wondered what lurks at the bottom of your garden at night, or which furry friends are visiting the school playground once all the children have gone home? Using a Raspberry Pi and camera, along with Google’s Vision API, is a cheap but effective way to capture some excellent close-ups of foxes, birds, mice, squirrels, and badgers, and to tweet the results.

This article first appeared in The MagPi 71 and was written by Jody Carter.

Using Google’s Vision API makes it really easy to get AI to classify our own images. We’ll install and set up some motion detection, link to our Vision API, and then tweet the picture if there’s a bird in it. It’s assumed you are using a new Raspbian installation on your Raspberry Pi and you have your Pi camera set up (whichever model you’re using). You will also need a Twitter account and a Google account to set up the APIs.

You’ll need:
Pi Camera Module
Pi NoIR Camera Module (optional)
ZeroCam NightVision (optional)
Waterproof container (like a jam jar)
Blu Tack, Sugru, elastic bands, carabiners
ZeroView (optional)

Motion detection with Pi-timolo

There are many different motion-detection libraries available, but Pi-timolo was chosen as it is easy to edit the Python source code. To install, open a Terminal window and enter:

cd ~
wget https://raw.github.com/pageauc/pi-timolo/master/source/pi-timolo-install.sh
chmod +x pi-timolo-install.sh
./pi-timolo-install.sh

Once installed, test it by typing in cd ~/pi-timolo and then ./pi-timolo.py to run the Python script. At this point, you should be alerted to any errors such as the camera not being installed correctly; otherwise the script will run and you should see debug info in the Terminal window. Check the pictures by waving your hand in front of the camera, then looking in Pi-timolo > Media Recent > Motion.

You may need to change the image size and orientation of the camera; in the Terminal window, enter nano config.py and edit these variables: imageWidth, imageHeight, and imageRotation. While we’re here, if you get a lot of false positives, try changing the motionTrackMinArea and motionTrackTrigLen variables and experiment with the values, increasing them to reduce sensitivity. See the Pi-timolo GitHub repo for more details.

There’s also going to be some editing of the pi-timolo.py file, so don’t close the Terminal window. Code needs to be added to import some Python libraries (below), and also added to the function userMotionCodeHere() to check with the Vision API before tweeting. We…
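
The finished listing isn’t reproduced here, but a sketch of the userMotionCodeHere() addition might look like the following. It assumes the google-cloud-vision and tweepy packages, placeholder Twitter credentials, and that your pi-timolo version passes the image filename into the hook, so treat it as a starting point rather than the article’s exact code.

# Hypothetical additions sketching the logic described above.
from google.cloud import vision   # pip3 install google-cloud-vision
import tweepy                     # pip3 install tweepy

vision_client = vision.ImageAnnotatorClient()   # uses GOOGLE_APPLICATION_CREDENTIALS

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
twitter = tweepy.API(auth)

def userMotionCodeHere(filename):
    # Ask the Vision API what it can see in the motion-triggered photo
    with open(filename, "rb") as f:
        image = vision.Image(content=f.read())
    labels = vision_client.label_detection(image=image).label_annotations
    names = [label.description.lower() for label in labels]

    # Only tweet if the classifier thinks there is a bird in shot
    if "bird" in names:
        twitter.update_with_media(filename, status="A visitor spotted by my Pi wildlife camera!")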

Ghost Chess: using electromagnets to move board pieces

On 10 February 1996, a chess-playing computer called Deep Blue sent shock waves around the globe by beating world champion Garry Kasparov in the first of their six games. But the IBM machine did so without moving any physical pieces itself – a human noted the computer’s move and performed it manually on the board. Had Ghost Chess been around back then, however, such intervention would have been redundant.

Designed by final-year MEng students Tim Ness, Alex Angelov, and Alex Smith from the University of Glasgow, Ghost Chess makes use of a robotic arm connected to a Raspberry Pi running the world champion chess program, Stockfish (stockfishchess.org). It focuses attention on a physical board, using motors and an electromagnet to pick up, move, and place 3D-printed chess pieces within the squares depending on the moves dictated by both human and computer. As such, it’s a mini-marvel – a prime example of a real-time embedded system.

“We wanted to create something that was fun, memorable, and slightly more challenging – an ‘automatic’ 3D chess game from scratch,” says Tim. What’s more, they wanted to make it as unobtrusive as possible. “We kept the robotics underneath the board because it was an easier way of designing the system,” Tim adds. “It keeps all of the electronics and the moving mechanism out of the way.”

Sensors are positioned below each of the 64 squares. They detect which squares are occupied and feed the information back to the software.

Ghost Chess: a multilayered project

Indeed, the project is broken down into five layers: the chess pieces, the board, a matrix of sensors, the mechanical arm, and, at the bottom, the Raspberry Pi 3. Of those, the arm was the trickiest part to develop. “Creating a design without having access to expensive ball races/bearings was a real challenge,” Tim recalls. T-slot bars bolted to a plywood base act as runners for two shuttles, and these are hooked up to timing belts mounted through a couple of pulleys to a stepper motor, allowing the arm to move left, right, up, and down. Meanwhile, a matrix of 64 latching Hall effect sensors (one for each square and capable of varying the output voltage in response to a magnetic field) lets the setup detect which spaces on the board are occupied. From that point on, it’s up to the software running on the Pi to work its magic.…
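
The team’s own sources aren’t shown in the article, but the software side can be sketched with the python-chess library, which a Pi could use to drive Stockfish and translate its reply into the pick-up and drop-off squares the arm needs to visit. The engine path, thinking time, and coordinate mapping below are assumptions for illustration, not the project’s actual code.

import chess
import chess.engine

# Ask Stockfish for the computer's reply and turn it into the two board
# coordinates the gantry needs to visit (pick-up square, drop-off square).
engine = chess.engine.SimpleEngine.popen_uci("/usr/games/stockfish")
board = chess.Board()

def computer_move(board, think_time=1.0):
    result = engine.play(board, chess.engine.Limit(time=think_time))
    move = result.move
    board.push(move)
    # chess square index 0-63 -> (file, rank) grid position for the arm
    src = (chess.square_file(move.from_square), chess.square_rank(move.from_square))
    dst = (chess.square_file(move.to_square), chess.square_rank(move.to_square))
    return src, dst

print(computer_move(board))   # e.g. ((4, 1), (4, 3)) for e2-e4
engine.quit()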

Seeing Wand uses Microsoft AI to describe things

Inspired by a blind cousin who would ‘look’ around his environment by way of touch, Robert Zakon has built a Seeing Wand that can speak the name of whatever it’s pointed at. Housed in a makeshift PVC tube, a Pi Zero is connected to a Camera Module that takes a photo when a push-button is pressed. The image is sent to Microsoft’s Cognitive Services Computer Vision API to get a description, which is then spoken – using the open-source eSpeak speech synthesizer – through a Speaker pHAT.

This article first appeared in The MagPi 71.

A Pi Camera Module is used to take photos of items, while speech is output through a Speaker pHAT.

What does the Seeing Wand do?

“I was looking for a way to teach my kids about innovation through integration and had been wanting to test out both the Pi and emerging cognitive computing services,” explains Robert. “They were a bit sceptical at first, but warmed up to it and thought the end result was pretty awesome (their words). My eldest helped with assembly, and both aided in testing.”

Seeing Wand: Microsoft’s Cognitive Services Computer Vision API

Robert’s debut Raspberry Pi project, it came together over the course of a few weekends. Asked why he chose Microsoft Cognitive Services over other image-recognition APIs, Robert responds: “Microsoft did a nice job with the API and it was fairly straightforward to integrate with. There was no particular reason for choosing it other than it appeared to be robust enough and free to use for our project.” The results surprised him in terms of accuracy and level of detail: “People, pets, and large objects seem to be the sweet spot.”

Even when the wand gets it wrong, the results can be amusing. “My kids had a lot of fun whenever something was misidentified, such as pointing at a toy robot on a table and having it identified as ‘a small child on a chair’. Another example was pointing at our garage with a sloping roof and being informed there was ‘a skateboarder coming down a hill’ – still not sure what it thought the skateboarder was. My favourite, though, had to be when we pointed it at clouds and heard what sounded like ‘Superman flying across a blue sky’.” As per its original inspiration, however, the Seeing Wand…
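
Robert’s code isn’t printed in the article, but the pipeline he describes can be sketched as below: snap a photo on a button press, send it to the Computer Vision ‘describe’ endpoint, and pipe the returned caption through eSpeak. The endpoint region and version, the subscription key, and the GPIO pin are placeholders we have assumed, so adjust them to match your own setup.

import subprocess
import requests
from picamera import PiCamera
from gpiozero import Button

# Hypothetical endpoint and key -- region, API version and GPIO pin are assumptions
VISION_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"
VISION_KEY = "YOUR_SUBSCRIPTION_KEY"

camera = PiCamera()
button = Button(17)

def describe_and_speak():
    camera.capture("/tmp/seen.jpg")
    with open("/tmp/seen.jpg", "rb") as f:
        response = requests.post(
            VISION_URL,
            headers={"Ocp-Apim-Subscription-Key": VISION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read())
    captions = response.json()["description"]["captions"]
    text = captions[0]["text"] if captions else "I am not sure what that is"
    subprocess.call(["espeak", text])   # speak through the Speaker pHAT

while True:
    button.wait_for_press()
    describe_and_speak()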

Super Tinytendo Case

In issue 68 we reviewed the Kintaro NES case – a simple concept that’s been done by many folk with access to a 3D printer, although it had the neat little gimmick of a flip-up cart-slot that gave you access to the USB ports on the Raspberry Pi.

This article first appeared in The MagPi 70 and was written by Rob Zwetsloot.

On the outside, the Super Tinytendo looks pretty basic in comparison – especially at the price tag of £27. However, it hides a big surprise: a fully functional case fan. This may not sound like much, and it probably won’t help you in your quest to create the perfect tiny retro console, but it’s a rare thing for a Pi case to include one. It connects very simply with a little plug over a couple of GPIO pins, with vents along the bottom to allow airflow.

Easy fitting

Otherwise, fitting the Raspberry Pi inside the case is very straightforward. Four sturdy screws keep both parts of the case together, and removing them reveals an obvious recess for the Pi to sit in. It can then be screwed down to the bottom of the case, although you will need to provide the screws for this yourself. It will stay in place without these screws, but it’s not particularly sturdy. Access to power and AV ends up at the rear of the case, as on a classic SNES, with the USB ports exposed on the side of the case for controllers. The original American version comes with a power LED that’s missing from the UK version, although its omission does make the UK case a little easier to set up.

The case itself is injection-moulded and pretty sturdy as a result. The Power and Reset buttons from the original SNES are lovingly recreated, although they’re unusable and serve as a reminder that this is the ugly, purple-and-grey American SNES body and not the classic, sleek European one. Still, transatlantic aesthetics aside, it is a pretty nice case, although one that might actually end up being more useful for people who regularly push the Pi to the limit and need some extra ventilation options.

Last word

4/5

A great, sturdy case with surprisingly good ventilation options that make it useful beyond its intended retro gaming applications.

Spaceplane: High-altitude Raspberry Pi glider

Izzy Brand, a student at Brown University, USA, created a clever system to recover the data from his high-altitude balloon (HAB). Rather than use a parachute and geolocator, he instead fitted the data module to a glider set to land at preprogrammed co-ordinates. Izzy reveals, “I had the idea a long time ago – maybe early high school (2013).” Initially the idea was just to find a way to fly a glider, dropping it from a hot-air balloon, as Izzy’s “nearby hills weren’t steep enough.”

The glider uses a Raspberry Pi Zero W and a Pixhawk, a flight controller powered by an ARM processor. “I chose the Zero W,” Izzy explains, “because it can run MAVProxy, essentially a terminal version of the GUI-based ground station software used to control the Pixhawk.” Izzy chose the Pixhawk due to his familiarity with its predecessor, the ArduPilot. At 10,000 m, the Zero W turned on the autopilot mode and “triggered a solid-state relay to burn the nickel-chromium wire and release the glider.”

Guided landing using Pi Zero W

Izzy explains that the Pixhawk module’s autopilot mode operates on a system of waypoints, so he set “only one waypoint co-ordinate at the target landing location, with an elevation of zero.” In addition, the Pixhawk doesn’t have a glider mode, so Izzy had to “set the maximum ascent angle to zero so the glider wouldn’t try to climb without a motor and thereby stall.”

Amazingly, after launching the balloon and driving to the landing site 122 miles away, the glider was waiting just 10 m from the target location. “We were astonished,” Izzy tells us. “This project failed miserably in 2015, [as] the glider landed in a forest about ten miles from its target.” Izzy would like to thank his friends Luke Fisher and Nick Menz for their help “in testing the glider and on the launch day.” The source code and flight logs are available online.
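
Izzy used MAVProxy on the Zero W, but the release logic he describes can be sketched with pymavlink and RPi.GPIO along these lines. The serial port, relay pin, and burn duration are assumptions for illustration, not values from his flight code.

import time
import RPi.GPIO as GPIO
from pymavlink import mavutil

RELEASE_PIN = 18          # GPIO driving the solid-state relay (assumed pin)
RELEASE_ALTITUDE = 10000  # metres

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELEASE_PIN, GPIO.OUT, initial=GPIO.LOW)

# Listen to the Pixhawk's MAVLink telemetry over the serial link
mav = mavutil.mavlink_connection("/dev/serial0", baud=57600)
mav.wait_heartbeat()

while True:
    msg = mav.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    altitude = msg.alt / 1000.0   # reported in millimetres
    if altitude >= RELEASE_ALTITUDE:
        # Burn the nichrome wire for a few seconds to release the glider
        GPIO.output(RELEASE_PIN, GPIO.HIGH)
        time.sleep(3)
        GPIO.output(RELEASE_PIN, GPIO.LOW)
        break
    time.sleep(1)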

Cocktail Machine

What’ll it be? Monkey wrench, swimming pool, zombie, painkiller? You can have any of these exotic drinks and more mixed by Stefan Höving’s home-made cocktail machine, whose secret ingredient is a Raspberry Pi.

This article first appeared in The MagPi 70 and was written by Nicola King.

Bioengineering student Stefan came up with the idea for automated cocktails thanks to a friend who “would always pour way too much alcohol into the drinks when we would get together on the weekends… We basically needed something that can ensure that everyone always has the same composition of juice and booze, that would still be enjoyable.” Stefan was also looking for a project to practise his fledgling Python programming skills, learned during his part-time job at an analytical institute. “The first thing I had to do was program a graphical user interface in PyQt for a temperature control system,” he recalls. While that never came to fruition, the GUI would eventually be put to good use in his cocktail machine, which offers a choice of nine drinks via a touchscreen display.

Tubes and valves

Housed in a handcrafted hexagonal wooden case, the cocktail machine holds five bottles. Each is fitted with a shot dispenser with two tubes: air is pumped through one tube to force liquid up the other, which leads to a magnetic valve to turn the flow on or off. “From there, all five tubes (one for every bottle) are funnelled into the outlet that one can see above the glass,” explains Stefan. To ensure the correct volume measures are poured out, the platform where the glass is placed is a scale. “The first thing the program does after a cocktail is selected is [discount] the weight of the glass. I was surprised by how precise the scale actually is.”

Using his woodworking skills, Stefan made a hexagonal case with six triangular compartments.

The scale did cause Stefan a headache during development, though. “For some reason, [its HX711 ADC] chip would produce random and really off values, although there was nothing on the scale. This would only happen when the machine was completely assembled. I reassembled the machine three times until I understood this behaviour.” In the end, the annoyingly simple solution was to connect the VCC of the HX711 to 3.3 V power instead of 5 V.

Pumping air

Stefan also had a problem with the original aquarium air pump, which worked…
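
Stefan’s program isn’t shown, but the pour-by-weight logic he describes could be sketched like this. The GPIO pins, the HX711 driver interface (Python HX711 libraries vary in their method names), and the timings are assumptions, so treat it as an outline of the idea rather than his actual code.

import time
from gpiozero import OutputDevice
# The HX711 driver interface used here (tare()/get_weight()) is assumed;
# adapt the calls to whichever HX711 library you have installed.
from hx711 import HX711

valve = OutputDevice(17)          # magnetic valve for one bottle (pin assumed)
pump = OutputDevice(27)           # air pump pressurising the bottle (pin assumed)
scale = HX711(5, 6)               # data and clock pins for the load cell ADC

def pour(target_grams, timeout=20):
    """Open the valve until the glass has gained target_grams of liquid."""
    scale.tare()                  # discount the weight of the empty glass
    pump.on()
    valve.on()
    start = time.time()
    try:
        while scale.get_weight() < target_grams:
            if time.time() - start > timeout:
                break             # safety stop if the bottle runs dry
            time.sleep(0.05)
    finally:
        valve.off()
        pump.off()

pour(40)   # e.g. pour roughly 40 g of juice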

Modular making with Tibbo-Pi

Tibbo Technology’s Tibbo Project System (TPS) has been around for four years, but it’s finally coming to the Raspberry Pi. Rather than using breadboards or soldering, the TPS uses Tibbit blocks to “implement an I/O function” by plugging them into a TPS ‘mainboard’ (motherboard). As announced at this year’s Computex technology show in Taiwan, a mainboard that incorporates a Raspberry Pi 3 will soon be available.

Tibbo Project System for Raspberry Pi

It is unclear whether the Pi 3B+ will be supported, but with 60 Tibbit blocks to add anything from power relays to audio converters to real-world sensors, you should be able to construct almost any project as quickly as a basic LEGO model. Details are still in the process of translation, but Tibbo confirms that Node-RED (JavaScript) is the standard coding language for Tibbo-Pi, with C and Python as alternatives. An English brochure for Tibbo‑Pi is planned and will be available through co‑works.jp/tibbo-pi, though this site is currently in Japanese. However, Tibbo’s main site lists all the currently available Tibbits in English.

Use TensorFlow AI on Raspberry Pi

Google TensorFlow is a powerful open-source software framework used to power AI projects around the globe. TensorFlow is used for machine learning and the creation of neural networks. These make it possible for computers to perform increasingly complex tasks, such as image recognition and text analysis.

When it comes to AI, most people think of powerful supercomputers crunching billions of numbers in giant databanks. But there are two parts to machine learning. There is a train/test part, where you use a lot of data to build a model. And there’s deployment, where you take a model and use it as part of a project. That’s where the Raspberry Pi fits in. Although Raspberry Pi isn’t officially supported by Google, there are example models included for the Raspberry Pi and it can be fun (if a bit hacky) to get TensorFlow up and running on a Pi. And there are lots of interesting community projects around that put TensorFlow to good use. Using TensorFlow can give you a good understanding of how AI works, and how to put AI to practical use in your projects.

STEP-01 Install TensorFlow with pip

TensorFlow can be incredibly easy to install on a Raspberry Pi, or something of a nightmare. It depends on the current build and which version of Raspbian OS you are running. Installation is often troublesome, but we’ve had recent success with installing it directly using pip. Open a Terminal window and enter:

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install python3-pip python3-dev
pip3 install tensorflow

STEP-02 Build from wheel

If pip doesn’t work, you can try to install TensorFlow from a wheel file. In a Terminal, enter:

sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0-cp34-cp34m-linux_x86_64.whl

Alternatively, you can use a nightly wheel built for Raspberry Pi, which is available from magpi.cc/xKLBzu. Download the wheel file and install it, like this:

sudo pip3 install --upgrade tensorflow-1.9.0rc0-cp34-none-linux_armv7l.whl

If you run into problems, take a look at TensorFlow’s Install Sources page or Common Installation Problems page.

STEP-03 Build from source

If pip fails, you could always build TensorFlow from source; Sam Abrahams has written detailed instructions on doing so (magpi.cc/oCYtme). You will need a spare USB stick (1GB or higher) to extend the amount of swap space on your Raspberry Pi, and be sure to follow the instructions carefully. It takes around six hours to build, but we have gone through the steps and they do work.

STEP-04 Hello TensorFlow

Hopefully, you now have TensorFlow up…
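
A quick sanity check for the 1.x releases this article targets is the classic snippet below; if it prints the greeting and the sum, your installation is working. Note this assumes a TensorFlow 1.x install – TensorFlow 2.x removes tf.Session, so it only applies to the wheels mentioned above.

import tensorflow as tf

# Build a tiny graph, then run it in a session (TensorFlow 1.x style)
hello = tf.constant("Hello, TensorFlow!")
a = tf.constant(2)
b = tf.constant(3)

with tf.Session() as sess:
    print(sess.run(hello))    # b'Hello, TensorFlow!'
    print(sess.run(a + b))    # 5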