Pragmatics of nano power radios

This is a brief note about high-level concerns with nano power radios: solar powered, without batteries.

Don’t rely on this; study it yourself, especially until I add proper links.  Some of it is just crude notes, even speculation.

Other References

A note at Mouser about ultra low power mcu design.

Context: nano power

The power supply:

  • provides low average current, around 1uA
  • has no large reserve
  • is expected to provide zero current often (say every night)

For example:

  • solar power with a capacitor
  • no battery
  • indoor light
  • solar panel smaller than a credit card

Overview

  • radio is duty-cycled
  • a voltage monitor/power supervisor and load switch chip provides clean reset/boot
  • boot sequence must be short and monitor mcu Vcc
  • use a power budget for design
  • use synchronization algorithms
  • testing is hard
  • over voltage
  • energy harvesting

Duty-cycled radio

The radio is sleeping most of the time.  When sleeping, a low-power timer runs to wake the system.  The sleeping radio cannot wake the system when it receives.

Example: the system may sleep for a few seconds, and be awake (with radio on) for about a millisecond.  That is, the sleep-to-wake ratio is around 1000:1.

Voltage monitor/Load switch

A microprocessor (in a radio SoC) needs a fast-rising voltage to boot cleanly.  Otherwise it may enter a state where it consumes power without booting (fibrillating?)  It may stay in that state for a long time.  The solution is to use an external voltage monitor aka power supervisor aka reset chip.  E.g. the TPS3839 (ultra-low power: 150nA quiescent current.)

You can’t just connect the voltage monitor to the reset line of the mcu: the mcu will still consume power while its reset line is held in the RESET state (between the time the voltage is high enough for the voltage monitor to have active outputs, say 0.6V, and the time the voltage is high enough to run the mcu, say 1.8V.)  An mcu may draw a fraction of a milliamp while held in reset.

So the voltage monitor drives a high-side load switch that switches power (Vcc or Vdd) to the mcu.  I use the TPS22860.  (You can switch ground i.e. low-side with a NMOS mosfet but it’s not so easy to design your own high-side switch.  You can’t switch the low-side of an mcu because many pins may leak to ground?)

Voltage monitor hysteresis and boot sequence

The voltage monitor asserts its Out (sometimes called Not Reset) at a certain threshold voltage, but then unasserts it if the voltage falls below the threshold by a certain amount called the hysteresis.  While the mcu is booting, it must not use so much current that Vcc falls below the hysteresis.  The boot sequence typically does a bare minimum, then checks Vcc, and sleeps until Vcc is well beyond the minimum.  That is, it allows time for the ‘challenged’ power supply to catch up and store a reserve.  Only then does the software proceed to use the radio, duty-cycled.
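The boot sequence described above can be sketched as follows.  This is a minimal illustration, not a real SoC API: readVccMillivolts() and sleepMs() are hypothetical stand-ins for your platform’s ADC and low-power timer, stubbed here (with a supply that recovers while asleep) so the sketch stands alone.

```cpp
// Sketch of a conservative boot sequence for a nano-power supply.
// readVccMillivolts() and sleepMs() are hypothetical stand-ins for
// your platform's ADC and low-power timer; the stubs below simulate
// a supply that slowly catches up while the mcu sleeps.
static int g_simulatedVccMv = 1900;
int  readVccMillivolts() { return g_simulatedVccMv; }
void sleepMs(int) { g_simulatedVccMv += 100; }  // harvester stores a reserve

const int kBootThresholdMv = 2400;  // well beyond monitor threshold + hysteresis

// Returns the Vcc (in mV) at which it became safe to proceed.
int waitForVccReserve() {
    // Only bare-minimum init belongs before this loop: heavy initialization
    // could drag Vcc below the monitor's hysteresis and cause a reset loop.
    while (readVccMillivolts() < kBootThresholdMv) {
        sleepMs(1000);  // sleep, letting the supply catch up
    }
    return readVccMillivolts();  // now safe to start the duty-cycled radio
}
```

The key design point is that the loop spends nearly all its time asleep, so the boot sequence itself cannot drag Vcc down.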

You could use a voltage monitor with higher hysteresis.  But they don’t seem to make them.  The hysteresis of the TPS3839 is only 0.05V.  You can play tricks with a diode/capacitor on the input of the voltage monitor to make it seem to have a higher hysteresis (to delay longer before unasserting.)  And there are application notes on the web about adding hysteresis to voltage monitors.  But they seem to apply to older voltage monitor designs, and don’t seem to apply to the ultra-low power TPS3839 (which samples Vcc.)

Also, you could design your own voltage monitor with more hysteresis.  For example, see the Nordic solar powered sensor beacon.  That uses a few mosfets to provide a 0.2V hysteresis (say booting at 2.4V and resetting at 2.2V).  Unfortunately, they don’t seem to have exactly documented how the design works.

Power Budget

A power budget calculates the average current of a system, given certain phases of certain durations, where each phase uses certain devices/peripherals.

Here the main phases are:

  • sleeping (say 1.5uA for 1 second)
  • radio and mcu on (say 6 mA for 1 millisecond)

You can almost ignore any phases where only the mcu is active; they should be a small portion of your budget.
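The budget can be computed mechanically as total charge divided by total time.  A minimal sketch, using the example phase numbers above (the Phase struct is just for illustration):

```cpp
#include <cstddef>

// One row of a power budget: a phase with a current draw and a duration.
struct Phase { double amps; double seconds; };

// Average current over one full cycle = total charge / total time.
double averageCurrent(const Phase* phases, std::size_t count) {
    double charge = 0.0, time = 0.0;
    for (std::size_t i = 0; i < count; ++i) {
        charge += phases[i].amps * phases[i].seconds;  // coulombs
        time   += phases[i].seconds;                   // seconds
    }
    return charge / time;  // amps
}
```

With the two example phases (1.5uA for 1 s sleeping, 6mA for 1 ms awake) this gives about 7.5uA average: the brief awake phase contributes several times more charge than a whole second of sleeping.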

A discussion at Digikey.

Synchronization algorithms

These make your units wake at the same time, so they can communicate with each other.

A beacon is usually unsynchronized.  The thing that hears a beacon (e.g. a cell phone) has enough power to listen for a long time.  You also might not need to synchronize if you have a “gateway” that is always powered and listening.  (See Zigbee.)

This still seems to be a research topic; there is much literature to read and few open source code examples.
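For illustration only (this is a toy, not one of the algorithms from the literature): if two units agree on a wake period, and one unit learns its clock offset to a peer (say, from a timestamp in a received message), both can compute the same wake instant.

```cpp
#include <cstdint>

const uint32_t kPeriodMs = 2000;  // agreed wake period (hypothetical value)

// Next shared wake time expressed on the local clock, given the offset
// (peerTime - localTime) learned from a received message.
uint32_t nextWakeMs(uint32_t localNowMs, int32_t offsetToPeerMs) {
    uint32_t peerNow  = localNowMs + offsetToPeerMs;              // peer's clock now
    uint32_t peerNext = (peerNow / kPeriodMs + 1) * kPeriodMs;    // peer's next boundary
    return peerNext - offsetToPeerMs;                             // back to local clock
}
```

Real algorithms must additionally handle clock drift between wakeups (guard intervals, re-synchronizing on every reception), which is where the research literature comes in.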

Testing is hard

With such a challenged, nano-power supply, testing is hard.  A bug may exhaust power so that the system brown-out resets, losing information about what happened.

Most hardware debuggers make the target consume more power than the power supply can provide?  TI seems to have ultra-low power debugging tools, but I haven’t studied them.

You can implement fault/exception handlers that write to non-volatile flash so that you can subsequently connect a debugger and read what happened.  Default handlers typically just loop forever (which will brown-out reset.)  Typical replacement handlers do a soft reset.  Unless your app records or communicates that, you might not even know the system reset itself.
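A sketch of such a handler, assuming writeFlashWord() and softReset() wrap your chip’s flash API and reset call (e.g. NVIC_SystemReset on ARM); flash is simulated here with an array so the sketch stands alone.

```cpp
#include <cstdint>

// Stand-ins for the chip's flash-write and reset primitives.
// A real handler would write to a reserved flash page.
static uint32_t g_fakeFlash[2];
void writeFlashWord(int index, uint32_t value) { g_fakeFlash[index] = value; }
void softReset() { /* platform reset, e.g. NVIC_SystemReset(), would go here */ }

// Record the fault to non-volatile memory, then reset instead of
// looping forever (which would just brown out the supply).
void faultHandlerSketch(uint32_t faultingPc) {
    writeFlashWord(0, 0xFA110000u);  // marker: a fault was recorded
    writeFlashWord(1, faultingPc);   // where the fault occurred
    softReset();
}
```

On the next debug session you read the reserved flash words back to learn that (and roughly where) the fault happened.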

Agilent (formerly Hewlett-Packard) sells expensive instruments for monitoring power consumption.  These may tell you when (in relation to other events) you are consuming more power than you expect, but not exactly why.

Over voltage

A solar cell is a current source, and provides a variable voltage.  Voc is the open-circuit voltage (when your capacitor is fully charged.)  It can exceed the maximum supply voltage of your radio (typically 3.6V.)

Voltage regulators (such as shunt regulators) that prevent that are themselves current wasters.

You can choose a solar panel whose Voc is less than that maximum, but there are few choices in that range (Voc < 3.6V, Vope around 2.4V, for indoor light.)  Or you can require that your solar panel never be exposed to strong light.

I haven’t found a zener diode that would clamp the voltage to 3.6V without leaking much at such nano-amp currents.

Energy Harvesting

This is another buzzword, but good to search on.  It often means: with a single coin cell battery.

Energy harvesting chips are available.  They solve some problems you might not have, such as over-voltage protection, or voltage boosting.

It often refers to other power sources such as heat or vibration.  Those power sources are usually even smaller than solar (light) power, but solar power is episodic (diurnal.)

Solar power in different settings differs by orders of magnitude.  Direct sun is ten times stronger than outdoor, blue-sky shade, which is ten times stronger than strong indoor light, which is ten times stronger than dim indoor light.


Writing custom libraries for Energia (Arduino)

This is just about the pragmatics of: where do I put source files so that they are a shared library?

Custom: one you write yourself.

Library: a set of C++ source files (.h and .cpp) that you want to share among projects.

The simplified Energia/Arduino view

Outside the simplified Energia/Arduino world, libraries would be in a separate, shared directory and they would be pre-compiled into an object and separately linked into your projects.  In the Energia/Arduino world, that is all hidden.

Also, in the Energia world, a library seems to be a zipped directory of source files that follow some conventions that identify the version and documentation of the library.   So you can share the library.  I don’t know what the conventions are.  But if you are going to share your custom library, you should follow the conventions, and zip it up.  Then others can use the simplified user interface for installing zipped libraries.  Here, I don’t bother with the zipping.

Creating a custom library

Briefly, you just need to create your source files in the place that Energia looks.

Find where your sketchbook directory is:  In Energia choose “Sketch>Show Sketch Folder.”  Expect a file browser dialog (the Finder on the Mac) to show you the directory.

You will see a subdirectory named “libraries”, and it will probably be empty.  (I don’t know where Energia keeps all the other pre-installed libraries.)

In that directory, create a directory with the name of your library e.g. “PWM”.

In the “PWM” directory, create your .h (and maybe .cpp) files, e.g. “pwm.h”

Now switch back to Energia and select “Sketch>Include Library>”.  Expect a hierarchical menu to appear.  Expect to see “PWM” in the “Contributed libraries” section of the menu.

You can also choose “Sketch>Include Library>Manage Libraries”.  Expect a browser kind of window to open.  You should be able to browse to a line saying “PWM version unknown INSTALLED”.  (In my opinion, this should not be called “Manage Libraries” because it seems all you can do is view a list of the libraries.)

(Note that Energia expects at least one source file in your library directory.  Until then, Energia may give an error “Invalid library found in….”)

Referencing the library

In your main sketch “#include <pwm.h>”

Then define an instance of the PWM class and call its methods.
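For example, a minimal, hypothetical pwm.h (the class and its methods are illustrative only, not a real Energia API; a real version would call analogWrite or a timer peripheral):

```cpp
// pwm.h -- hypothetical minimal library header for the "PWM" example.
#ifndef PWM_H
#define PWM_H

class PWM {
public:
    explicit PWM(int pin) : _pin(pin), _duty(0) {}

    // Remember the duty cycle; a real implementation would also
    // drive the hardware, e.g. analogWrite(_pin, ...).
    void setDuty(int percent) { _duty = percent; }

    int duty() const { return _duty; }

private:
    int _pin;
    int _duty;  // percent, 0..100
};

#endif  // PWM_H
```

In the sketch you would then write something like: PWM myPwm(3); myPwm.setDuty(50);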

Developing and managing your library

You can just edit the files in place, using another editor.   When you use Energia to “verify” the main sketch that uses the library, it will recompile your changed library.

By managing I mean: copy the files out of the sketchbook folder to a safer, more shared place.  The sketchbook is in /Users/foo/Documents/sketchbook (on a Mac).  I prefer to put them under source control in a “git” folder, or in the “Dropbox” folder, so when I am done developing, I copy the library folder somewhere else.

I suppose you could use git in that directory, and when you are done, commit and push that repository to your shared (master) repository on github.

Brief Summary

A library is just a named directory in the directory “sketchbook/libraries”.  You can create a library yourself using a file browser and editor.

Some notes on Panasonic Amorton solar cells

These are just rough notes that might help someone else in their personal electronic projects.  About Amorton’s indoor solar cell products, AM1456, AM1417, etc.

These solar cells are like what you see in calculators.  They are a few square centimeters or larger.  Typically like pieces of glass.

For low light

These indoor products are for low light.  They are characterized for as little as 50 lux, which is not much light, typical of an indoor space with average lighting.  A room with a sun facing window, on a clear or overcast day, typically has much more light.  Even typical artificial lighting provides this much light.

The current available at these lighting levels is only a few uA.  Also, the Vope (the operating voltage, which really means the maximum power voltage, see MPPT) is a fraction of the Voc (open-circuit voltage) measured in much stronger light.  For example, if the panel has four cells which each deliver a maximum of 0.6 volts, the Voc might be 2.4V but the Vope in 50 lux might be only 1.4V.

You should design your circuits to operate around Vope, since if you design to operate at the Voc, it will take strong light, and the power delivered will be smaller than you could get at Vope.

(PowerFilm does not characterize their film solar cells at such low light levels.  But they recently started selling LL3-37, which IS targeted for low light.)

Availability

Some of the glass ones are available from Digikey and Mouser.  The film versions don’t seem to be readily available in small quantities to retail buyers.

Solderability

The models that are commonly available have pre-soldered wires, AWG 30.  I have had good success in unsoldering the wires, leaving the solder ball, and soldering on a different wire (including tinned piano wire.)

Surface Mount

I also tried unsoldering the wire, cleaning the pad with flux and desoldering braid, and reflow surface mounting, with very poor results (say one success in three.)  The manufacturer said this is not a supported use.  After a failure, you see a brown surface that solder won’t stick to.  Evidently the ‘interface’ between the solder and the semiconductor is very thin and its solderability is easily destroyed.

Some of the product variations for AM1456, AM1417 have no pre-soldered wire, but only conductive paste, and they are available only in large quantities (AFAIK.)  I don’t think these are intended for reflow soldering either.

Amorton recently started selling model AM1606, which IS intended for surface mount (SMD.)  Available from Mouser.

Durability

They seem relatively robust.  I have dropped them from desktop height onto concrete and they don’t seem to break.  I have had a few, small, conchoidal chips out of the edge, seeming cosmetic, not affecting the power out.

Under extreme mechanical stress, the soldered pads occasionally detach at the ‘interface’.  See above re surface mount.

Safety

The glass edges are not sharp.  I have never cut myself on the edges.  I suppose the manufacturing process somehow rounds the edges a little, even though they appear quite square.  However, I suppose the edges are intended to be enclosed in a frame.

But since they are small and glass, they ARE a hazard for small children, and if they should break into pieces.


Tutorial/strategy: layout a 2-layer PCB in KiCad EDA

This gives a high-level overview, or strategy, for laying out a simple PCB in KiCad or another EDA tool.  This is for beginners.  It doesn’t give details of exactly how to use the user interface.  This is not polished.  It tries to teach something that is obvious only to experienced users and might save beginners some learning time.

This is for a ‘simple’ design where:

  • most of the components are surface mount (SMD) on the front
  • the board is a 2-layer board (copper on the front and back, not in a middle layer; such a board is the cheapest to buy, design, and assemble)
  • there are few components, and few busses (bundles of signal lines routed together.)

The strategy is:

  • read the netlist
  • auto spread or place your components
  • add a board outline larger than you think the board will take
  • move and rotate the components into the board outline, to minimize crossing of rats nest lines
  • add a front zone for the ground net
  • add a zone on the back for the power net
  • run the DRC tool

Now many of the rats nest lines will be gone, since the ground and power nets are usually the largest nets, and the DRC tool will connect the front and back zones to many of the pads on those nets.  Most of the remaining rats nest lines are for “signal” nets.

Now add tracks and vias to any power pads (that are not through holes) to the power plane (zone) on the back.

Now iterate:

  • move and rotate components to reduce crossing of rats nest lines and to shrink the board area
  • run the DRC tool

When you have in some sense done all you can do to minimize crossing rats nest lines:

  • switch to the OpenGL view (push-and-shove only works in that view)
  • choose “Do not show filled areas in zones” (fill obscures tracks)
  • manually route the remaining rats nest lines, using push-and-shove instead of clicking at many places along the track’s route to make it go exactly where you want it.

If you need to move some components to get room for a signal track, use Grab instead of Move, since that will keep the tracks you have already connected to the component.

Now you might tweak by moving components and nodes of tracks, running the DRC often to check you haven’t violated design rules.  Generally you might tweak to reduce the size of the board, but it is better if you did that before you did manual routing.

Finally:

  • redraw your board outline
  • run DRC again and ensure your zones are still contiguous, connected planes (if you reduce the outline too much, it might island your ground and power planes.  Generally a ring of copper around the board edge connects islands of the zone together.  The ground and power planes can have some enclosed islands, but one point of a ground plane is to let signals return where they want in short paths.)


0xFFFFFFFE, reclaim_reent() error symptoms of embedded programming

This is a report of one cryptic symptom (and possible fixes) you might encounter when you are embedded programming.  I report it because I have experienced it more than once and always forget what it means.

When you are trying to flash your embedded mcu, the debugger seems to download to the chip, then starts and then stops, showing a stack trace something like this:

0xFFFFFFFE
reclaim_reent()

Usually you expect the debugger to stop at main() and wait for you to tell the debugger to run (but that depends on whether you have configured your IDE and debugger to breakpoint at main.)

It might mean (not a program bug, a process error):

  • your linker script <foo>.ld describes the memory of your chip incorrectly
  • you haven’t erased the chip’s ROM yet

About the latter: I am not sure, but modules you buy might already be flashed with a program such as a DFU bootloader, and configured to protect that code in ROM from being overwritten by a debugger.  For example, on the Nordic NRF51, to remove the protection and erase all of ROM so that you can then use the debugger:

 nrfjprog --recover --family NRF51


Vagga Rust embedded

TL;DR

Work in progress: so far, a vagga container of Rust tools.  Eventually in the container, tools for embedded Rust (Xargo) and my own source project under version control.

A similar endeavour is Japaric’s “cross”.   The differences:

  • that uses a Docker container, here I use Vagga container
  • that might be ready to use, this is an explanatory exploration

A repository of source for this blog.

About

This is:

  • a log of my experience
  • well linked
  • for audience: developers/programmers.

Background

I have been programming embedded computers in C++.  I hate C++.  I have also used Python and Swift.  I read some background material.

So here I try to install Rust.  Ultimately I want to program in Rust an embedded ARM mcu on a NRF52 radio chip using the NRF52DK dev board.

Meta

Typically a developer knows/remembers how to set up a development machine (how to install the OS, development packages, an IDE, etc.)  Typically, you use the GUI, maybe write a shell script, take good notes, iterate when you discover packages missing.

Vagga helps you capture the entire process of setting up a development machine.  You capture the process in a vagga configuration file.  Which is a text configuration file or script; no GUI.

This blog itself is an annotated record of writing and debugging such a vagga configuration file.

Strategy

The Rust project moves fast.  I don’t want to struggle with keeping up to date.  I am not sure I will keep it.  So I will install Rust in a container.  A container is like a virtual machine, but lighter weight, and only on Linux.

Vagga github repository

Vagga implements containers.  Vagga is targeted for developers i.e. specialized to contain development environments. It seems like a natural fit.

Advantages of containers/virtual machines:

  • throwaway, non-invasive: can’t destroy your computer’s installation of non-development packages or your personal applications (such as Gimp or LibreOffice)
  • distributable: you can give a container to other developers

Advantages of vagga:

  • is platform (Linux distribution) agnostic.  Vagga scripts might be portable to other developer’s machines (Linux-like.)
  • is a high-level package manager
  • userspace (doesn’t require root privileges)

Vagga is written in Rust.  (I hoped Vagga might even install Rust for me, but no: although Vagga is written in Rust, I install a binary Vagga, which means I need to install Rust separately, but in a container.)  In this case, using Vagga is a form of “eating your own dog food”: if you are going to learn Rust, you might as well use tools that are written in Rust.

More meta

Vagga is a high-level package manager.  (discusses goals and future.)

Rustup is also a package manager (toolchain manager) exclusively for Rust.

Xargo is also a toolchain manager, exclusively for cross-compiling Rust language programs.

So it seems strange that to combat the proliferation of package managers, we invent yet another higher-level package manager.   And here we use a chain/graph of package managers:  vagga, rustup, cargo, xargo, your favorite Linux distribution’s package manager.

A high-level package manager makes more sense if you are targeting your app to many Linux distributions.  Here, I am only targeting (ultimately) one embedded architecture.  But by using a high-level package manager, I can distribute my development environment.  And I can easily replicate my home dev machine in other remote physical locations.

Installing Vagga

Vagga instructions for installation.

Per the above, on Ubuntu, just paste this into a terminal:

echo 'deb [arch=amd64 trusted=yes] https://ubuntu.zerogw.com vagga main' | sudo tee /etc/apt/sources.list.d/vagga.list
sudo apt-get update
sudo apt-get install vagga

(That adds a repository for vagga, and installs vagga from it.)

Putting an OS in my container

Vagga docs on configuration

Vagga is configured from a text file, vagga.yaml.

I created this simple directory tree:

rustdev/
    vagga.yaml

With the contents of vagga.yaml:

containers:
  rustdev:
    setup:
    - !Ubuntu yakkety
    
commands:
  test: !Command
    description: Test 
    container: rustdev
    run: [ps]
 

In other words, a container named “rustdev” and a command “test”.

Testing Vagga

At a terminal, change directory to “rustdev” and enter “vagga test”.  I got (unexpectedly):

(1/1) Installing alpine-keys (1.3-r0)
OK: 0 MiB in 1 packages
fetch http://repos.mia.lax-noc.com/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
ERROR: http://repos.mia.lax-noc.com/alpine/v3.5/main: No such file or directory

Which seems like a problem with online repositories.  I entered “sudo apt-get update”, and tried again.  This time vagga seemed to install an OS in the container (in about a minute) and run the command, yielding:

PID TTY TIME CMD
 1 ? 00:00:00 exe
 2 ? 00:00:00 ps

IOW, there are only a few processes running in the container.

Emphasizing:  in my experience I had to do this to get it to work:

cd rustdev
vagga test    (fails to get alpine-keys)
sudo apt-get update
vagga test

Installing Rust compiler in the container

From my reading, I know the rust compiler command is “rustc”.  Entering that at a command line, expect:

The program 'rustc' is currently not installed. You can install it by typing:
sudo apt install rustc

So I know that ubuntu has a package.  I don’t want it installed directly, but in the vagga container.

Make this change in vagga.yaml:

    - !Ubuntu yakkety
    - !Install [rustc]

IOW, tell vagga you want to install the package named “rustc” in the container.

Now  “vagga test” first checks the container configuration for updates and yields:

Generating locales (this might take a while)...
 en_US.UTF-8... done
Generation complete.
Reading package lists... Done
E: Method mirror has died unexpectedly!
E: Sub-process mirror received a segmentation fault.
WARN:vagga::builder::commands::ubuntu: The `apt-get update` failed. You have no mirror setup, and default one is not always perfect.
Add the following to your ~/.vagga.yaml:
 ubuntu-mirror: http://CC.archive.ubuntu.com/ubuntu
Where CC is a two-letter country code where you currently are.

So I created a file named “.vagga.yaml” in my home directory (this is a hidden “settings” file.  Do not change your vagga configuration file, i.e. ~/rustdev/vagga.yaml) with the contents

ubuntu-mirror: http://us.archive.ubuntu.com/ubuntu

Now, “vagga test” yields:

...
E: Failed to fetch http://security....
...
E: Some index files failed to download. They have been ignored, or old ones used instead.
WARN:vagga::builder::commands::ubuntu: The `apt-get update` failed. If this happens too often, consider changing the `ubuntu-mirror` in settings

So I followed this thread to find the “best” mirror, and changed the mirror, but it still fails.

So now I rethink: I really want embedded Rust, which suggests the nightly build of Rust, not the outdated package that Ubuntu provides.  I don’t want to install packaged Rust, I want to install rustup….

Installing rustup

Rust install instructions.

Rust is usually installed by the “rustup” tool.  New goal: install rustup in the container.  It seems that Ubuntu does not package rustup separately.  So edit vagga.yaml to add the instructions given by Rust.org for installing rustup, wrapped in a shell inside vagga.  Naively:

    - !Ubuntu yakkety
    - !Sh "curl https://sh.rustup.rs -sSf | sh"

But those instructions download a shell script and pipe it to a shell and the shell script is interactive.  So I hacked some more.  Summarizing the struggle:

  • curl was absent from the container
  • the curl package would not install because of mirrors outdated
  • I switched OS version to Xenial (Ubuntu 16.04LTS) hoping the mirrors were more stable
  • I switched to wget instead of curl
  • the rustup shell script requires curl

Now I read vagga examples.  From github:vagga-examples/python  I found:

  • Most developers install an omnibus package, “build-essential”, that includes all the commands a developer typically uses.  IOW, a bare OS is unsuited for developers.
  • vagga has its own construct for downloading files

And the vagga configuration file for vagga itself builds a Rust development environment (since vagga is written in Rust.)

Using those examples, I ended up with the script which you can find in my git repository.  I don’t include the script here, it may suffer revisions.

If you enter “vagga test” expect:

/work/.home/.cargo/bin/rustup
/work/.home/.cargo/bin/rustc
rustc 1.15.1 (021bd294c 2017-02-08)

Now I am wondering whether I can run my IDE in the container, and how my source code gets into and out of the container (probably git.)  The answer seems to be that the directory where you invoke vagga is the “project” directory and is mapped into the container as /work.  Your IDE can work outside the container.  All artifacts of the build should be in the container and not pollute your project directory?

Brief notes

Create a hidden setup or options file ~/.vagga.yaml with contents: ubuntu-mirror: http://us.archive.ubuntu.com/ubuntu

Until you get your vagga.yaml correct, vagga seems to repeatedly download dependency packages.  IOW, errors prevent completion of the container.  When you achieve a correct container, vagga knows, and only downloads dependency packages as needed (when the repository publishes a security update or a nightly update?  Commands don’t establish dependencies?)

Doing “sudo apt-get update” between iterations seems to help some errors.

The directory where you invoke vagga is the “project” directory and is mapped into the container as /work.

Vagga stores the container in the hidden directory .vagga in the project directory (alongside your vagga.yaml.)  (To delete a container, remove that directory?)

Continue with Part Two…


Using relative coordinates in KiCad to design mechanical aspects of PCB boards

TL;DR: press the space bar to set the origin of the relative coordinate system and then watch the dx,dy cursor coordinates in the status bar as you draw.

See  section 2.6. “Displaying cursor coordinates” of the Eeschema reference manual.

There are two coordinates systems (frames) in KiCad:

  • absolute: origin is in one of the corners of the “paper” sheet, displays as “X…Y…”
  • relative: origin is wherever you set it using the space bar, displays as “dx…dy….”

KiCad continually displays the location of the cursor in the right side of the status bar which appears near the bottom of the application window.  KiCad updates the displayed location even as you use some tool to draw, place, etc.  KiCad displays the location of the cursor in both coordinate systems.

Use the relative coordinate system to layout a board mechanically.  First set the origin, say to the upper left corner of your board:

  • move the cursor to where you want it
  • press the space bar.  Expect the relative coordinates to change to “dx 0.0000 dy 0.0000.”

Then as you draw, you can stop the cursor at some precise dimension.

KiCad does not persist the origin of the relative coordinate system (save it in your project.)  You need to set the origin at the beginning of each design session.

KiCad does not display any particular symbol at the origin of the relative coordinate system.  You can add a fiducial symbol at the origin.

Few people use the absolute coordinate system and many people complain that you can’t set its origin.  But they should just use the relative coordinate system.

From a user-interface viewpoint, maybe KiCad should:

  • place more emphasis on the relative coordinate system (display relative coords left of/preceding the absolute coords)
  • make the origin persist
  • add a pop-up menu item to set the origin (space bar is too obscure)
  • make the displayed nomenclature more consistent (why is it not “dX,dY and dx,dy” or “X,Y and x,y” or “aX, aY and rX, rY” or “abs x,y and rel x, y”)