Tuesday, May 5, 2015

Greenhouse Phase 2: Temperature and Humidity Sensors

The next phase of the greenhouse is to add some sensors to the mix. The initial round will be for measuring temperature and humidity both inside and outside of the greenhouse, and for measuring wind speed outside the greenhouse. Wind speed measurements inside the greenhouse would hopefully be pretty boring data, except perhaps if someone left the door open.

The greenhouse was initially meant to be mostly underground, in a design called a walipini, but the diggers hit bedrock rather soon. Had that worked, we would have had a greenhouse with a reasonably constant temperature, since temperatures underground stay fairly stable. Instead we ended up with a climate battery. The concept here is to pump heat into the ground during the day, when it is warm or at least not too cold, and then release that heat back into the greenhouse at night. This is accomplished with a fan that blows air through a series of tubes buried in the ground of the greenhouse.

I am very curious to see how well the climate battery works, so I want to capture temperature and humidity data for a few weeks with the climate battery off, then turn it on and see how much the temperature and humidity profiles smooth out.

After some searching, I decided to use the AM2315 temperature and humidity sensor from Adafruit. The sensor comes in a case that protects the sensing elements inside. The description on Adafruit says that it isn't really rated for outdoor use, but I have read about a lot of people using it that way, so I will give it a try.

One immediate complication is that the AM2315 is an I2C device. For those unfamiliar with I2C, it is a two-wire protocol for communicating with a series of devices on a serial bus: one wire for a clock and a second wire for data. Each I2C device has an address, and you read from or write to a device by giving its address. Unfortunately, the AM2315 has a fixed I2C address (0x5C), so if you want to use two of them, you need two independent I2C buses.

My initial design used 2 Arduino Unos, one for each AM2315. I am using Arduinos for now because I want to experiment over time with which sensors I am going to use, and I want to be able to easily throw other sensors onto a breadboard. Then, at some point in the future, I can build something a little more permanent. Two Arduinos means two USB ports to get the data out to a host for processing, so after a while I decided to go with the Arduino Due, which is the only Arduino with 2 I2C buses on it.

I got the Adafruit AM2315 library from its git repository at https://github.com/adafruit/Adafruit_AM2315 and wired the AM2315 to the I2C bus on pins 20 and 21 of the Due. A quick check showed everything working just fine.

That second AM2315 ended up being more of an issue. The Adafruit library only supports the I2C bus on pins 20 and 21, and I wanted to use the code on both buses. I took a look at the source of the library and found that the Wire instance variable was hardcoded directly into the AM2315 code, so I modified it to allow the user to specify which bus to use.

The initial class definition in Adafruit_AM2315.h had

class Adafruit_AM2315 {
 public:
  Adafruit_AM2315();

and I changed it to

class Adafruit_AM2315 {
 public:
  Adafruit_AM2315(TwoWire* w=&Wire);

This allows the declaration Adafruit_AM2315 foo; to still use Wire, but I can now also specify either Wire or Wire1, with a slightly more complicated syntax:

Adafruit_AM2315 foo(&Wire1);

I also needed to add a private instance variable TwoWire* wire to the class definition.

This required some changes to the Adafruit_AM2315.cpp code: I replaced Wire. with wire-> throughout the source and modified the Adafruit_AM2315 constructor from


Adafruit_AM2315::Adafruit_AM2315() {
}

to

Adafruit_AM2315::Adafruit_AM2315(TwoWire *w): wire{w} {
}

You can find the modified library at my github repo https://github.com/kmhughes/Adafruit_AM2315.

The circuit for both AM2315s, the anemometer, and a barometric pressure sensor (which can be ignored for now) is as follows.


Note the 10k pullup resistors on the I2C lines; the Arduino will not see the AM2315s if these resistors are missing.

The Arduino code to read the sensors is


#include <Wire.h>
#include <Adafruit_AM2315.h>
#include <Adafruit_MPL3115A2.h>

// The inside AM2315 temperature and humidity sensor.
Adafruit_AM2315 am2315Inside(&Wire);

// The outside AM2315 temperature and humidity sensor.
Adafruit_AM2315 am2315Outside(&Wire1);

// The sensor data packet containing all data to be transmitted
// to the host.
typedef struct {
  float temperatureInside;
  float humidityInside;
  float temperatureOutside;
  float humidityOutside;
  int windSpeed;
} SenseData;

SenseData senseData;

void setup() {
  Serial.begin(9600);

  if (! am2315Outside.begin()) {
     Serial.println("Outside AM2315 sensor not found, check wiring & pullups!");
     while (1);
  }

  if (! am2315Inside.begin()) {
     Serial.println("Inside AM2315 sensor not found, check wiring & pullups!");
     while (1);
  }
  
  // TODO(keith): initialize barometric sensor
}

void loop() {
  am2315Outside.readTemperatureAndHumidity(senseData.temperatureOutside, senseData.humidityOutside);
  am2315Inside.readTemperatureAndHumidity(senseData.temperatureInside, senseData.humidityInside);

  senseData.windSpeed = analogRead(0);
  
  // TODO(keith): Remove when sending data to host
  Serial.print("Outside Hum: "); Serial.println(senseData.humidityOutside);
  Serial.print("Outside Temp: "); Serial.println(senseData.temperatureOutside);
  Serial.print("Inside Hum: "); Serial.println(senseData.humidityInside);
  Serial.print("Inside Temp: "); Serial.println(senseData.temperatureInside);

  Serial.print("Wind Speed: "); Serial.println(senseData.windSpeed);
  
  // TODO(keith): Write data to host computer
  
  // TODO(keith): change sample rate to every 5 minutes or so
  delay(1000);
}

Here you can see that the inside AM2315 is on the first I2C bus (pins 20 and 21) and the outside AM2315 is on the additional bus. The anemometer is connected to analog pin 0 on the Arduino.

You may wonder why I am using a struct to store the sensor data. As you will see once the code is complete, this will make it very easy to write all of my sensor data to the host computer in a single call. But for now I am debugging, so I will leave the Serial prints in and let the code look annoyingly complicated.

This code can be found at https://github.com/kmhughes/robotbrains-smartspaces/blob/master/arduino/GreenhouseSensors/GreenhouseSensors.ino

One thing I discovered is that even when the two AM2315s sit physically next to each other, they give different temperature values. So I will have to calibrate them against a known accurate sensor and subtract a per-sensor offset from each reading.

At the moment I cannot get the barometer, which is also an I2C device, working on the Due, though I did make it work on an Uno. Once I sort that out, I will modify the code, post the host side that will capture and store the data for later graphing and analysis, and show the wiring inside and outside the greenhouse.

Monday, May 4, 2015

The Jetson TK1 and the Caffe Deep Learning Project

I have been very interested in starting to use OpenCV to add vision to my interactive space projects. I recently bought a Jetson TK1 DevKit to help me with this. The Jetson TK1 is a development board for the NVIDIA Kepler GPU, a GPU built for mobile platforms. The GPU has 192 CUDA cores, which means I can not only use an accelerated OpenCV for computer vision processing but also start experimenting with parallel programming on a GPU. The book CUDA By Example by Sanders and Kandrot has been very useful in learning how to program the cores; I imagine I will be talking more about GPU programming in future posts. The TK1 has OpenCV libraries available for it that take advantage of the CUDA cores and can do quite rapid image processing.

The TK1 also has a quad-core ARM Cortex processor for its main CPU, plus GPIO, I2C, SPI, and a ton of other features. Not bad for $192, or just $1 per CUDA core.

Here is a picture of the board along with the solid state hard drive I attached to it.




There are easy-to-follow instructions for getting started with the TK1 at https://developer.nvidia.com/get-started-jetson. These instructions include how to flash the operating system. The site http://www.elinux.org/Jetson_TK1 includes a lot of great information, including how to install the CUDA libraries and the CUDA-accelerated OpenCV. The only thing I recommend is a reasonably fast Internet connection, as some of the system components are quite large; for example, the entire Ubuntu 14.04 system image to flash onto the board comes in one download.

One thing I am very interested in is starting to add much more intelligence to the interactive spaces I want to build. I don't remember exactly how I found it, but I recently discovered the Caffe deep learning framework, an open source project. You can get a lot of information about it at its website at http://caffe.berkeleyvision.org/. Deep learning is a machine learning technique that learns to extract representations of data and then recognize patterns in those representations. These representations help recognize things like faces or, as I saw in an ACM article today, solve the cocktail party problem. This problem is one many people are familiar with: you are at a party, or some other place with lots of people, and you can tune out everyone else's voices and hear just one conversation. This has been a very hard problem for computers to solve, but deep learning systems have made it possible.

Caffe supplies a general framework for deep learning. It comes with a series of models for doing object recognition in images and I hope to find it useful in a variety of other deep learning tasks.

When I installed CUDA on my TK1 I got version 6.5. The standard installs in the instructions from the websites above gave me version 4.8 of the GNU C and C++ compilers.

To install Caffe I first installed the following dependencies:

sudo apt-get install libprotobuf-dev protobuf-compiler gfortran \
libboost-dev cmake libleveldb-dev libsnappy-dev \
libboost-thread-dev libboost-system-dev \
libatlas-base-dev libhdf5-serial-dev libgflags-dev \
libgoogle-glog-dev liblmdb-dev


Next I installed the Caffe source. If you don't have it already, first install git.

sudo apt-get install -y git

After git is installed, go to the directory where you want to install Caffe and clone the source.

git clone https://github.com/BVLC/caffe.git
cd caffe

git checkout dev
cp Makefile.config.example Makefile.config


Now you can build Caffe.

make -j 8 all

Once the build is complete, it is a good idea to run the tests and make sure your install worked. This worked for me the first time, so it seems hard to screw it up.

make -j 8 runtest


If all the tests pass, you can then run the timing tests.

build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt --gpu=0

The --gpu=0 at the end tells the code to run the timing on the GPU. The command

build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt

will run it on the CPU of your computer, not the GPU.

The test runs over 10 different versions of the image per pass, so to figure out how much time is taken per image recognition, look at the 'Average Forward pass' time and divide by 10.

On the TK1, I found approximately 25 msec per image on the GPU, while on the CPU I got 602 msec per image. Quite the difference: roughly a 24x speedup.

Next I am trying to get the NVIDIA cuDNN library working with Caffe. I have version 2 of the cuDNN library, and apparently that means I need to follow special instructions to get Caffe to use it.

But once I get it compiled in, I will start figuring out how to create my own deep learning neural network models and will write about how that goes.