This tutorial shows the commands to compile and install the “hello world” of kernel module compilation using the stock Angstrom distribution that comes on the BeagleBone.
This only requires a few opkg commands. I didn’t see this anywhere on the newsgroups or forums…no idea why.
This tutorial explains how to locate and configure the GPIO pins for input, output, pull-up, pull-down, high-z, etc.
I couldn’t find this information clearly stated in any one place, so here’s my attempt. If the TI wiki was actually a wiki that allowed people to edit it, I’m sure something like this would already be there.
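As a taste of what configuring a pin looks like, here's a minimal sketch using the sysfs GPIO interface that the stock Angstrom kernel exposes; the pin number is an arbitrary example, and note that pull-up/pull-down/high-z selection actually happens in the kernel's pinmux files, not in /sys/class/gpio:

```python
def gpio_setup_writes(gpio, direction):
    # exporting the pin creates /sys/class/gpio/gpioNN/
    writes = [("/sys/class/gpio/export", str(gpio))]
    # direction is "in" or "out"; pull-up/down/high-z would be set through
    # the pinmux instead (e.g. files under /sys/kernel/debug/omap_mux/)
    writes.append(("/sys/class/gpio/gpio%d/direction" % gpio, direction))
    return writes

def apply_writes(writes):
    for path, value in writes:
        with open(path, "w") as f:  # requires root on the board
            f.write(value)

# on the BeagleBone itself:
# apply_writes(gpio_setup_writes(38, "out"))
```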
This tutorial shows how to access any part of the main memory from within the PRU, and reasons you shouldn’t.
Using a few infrared detectors with various wavelengths placed near the intake valve, you could pretty accurately estimate the maximum temperature inside the combustion chamber when the valve opened.
The internals of the combustion chamber would reflect infrared pretty perfectly, allowing it to bounce out of the intake valve while it was opened. The multiple infrared detectors would have overlapping response curves to allow a multi-spectral infrared color estimation. Using the maximum detected color, you could estimate the temperature by comparing it to a blackbody radiator.
The valve-closed levels could be used to cancel any ambient levels, and the signal could be filtered against the changes in intensity from the changes in valve aperture size as the valve opened and closed.
Being near the intake, the sensor might stay clean. A window could be placed in the side of the intake pipe to keep the sensor out of harm's way, with a light collector of some kind focused on the intake valve.
Maximum temperature is nice to know since it is what would cause any pre-ignition.
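To make the blackbody comparison concrete, here's a toy two-color (ratio) pyrometry sketch using the Wien approximation to Planck's law. The wavelengths and temperature are made-up numbers, and a real sensor would also need calibration for emissivity and for the detectors' overlapping response curves:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T):
    # Wien approximation to Planck's law (good when lam*T << C2)
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    # invert the intensity ratio of two detectors for temperature
    r = i1 / i2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * math.log(lam2 / lam1) - math.log(r))

# two hypothetical IR detectors at 2 um and 4 um watching a ~900 K surface
i1 = wien_intensity(2e-6, 900.0)
i2 = wien_intensity(4e-6, 900.0)
```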
To help prevent car sickness while reading, use the “make sure the horizon is in view” approach by back-illuminating the page you’re reading with a pattern that remains, from an inertial perspective, stable.
A notepad holder/device cover (like an iPad's) with a low-res LED pattern, a gyro, and an accelerometer: the LED matrix would display a pattern that would remain relatively constant in space, held steady by the gyro and accelerometer.
So, as you bounced your way down the rail line, the page would appear to move around on the pattern, matching the acceleration signals from your semicircular canals.
With a decent graphics card, this could be built into laptops/tablets to move images, text, or the whole screen image, around on the display space.
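The stabilization math is simple at its core, assuming an idealized accelerometer with gravity already removed: double-integrate the acceleration into displacement and shift the pattern by the opposite amount, so it stays put in inertial space. A minimal sketch (a real version would need drift correction and the gyro for rotation):

```python
def pattern_offsets(accels, dt):
    # double-integrate acceleration samples into displacement, and shift
    # the displayed pattern by the opposite amount each frame
    v = 0.0
    x = 0.0
    offsets = []
    for a in accels:
        v += a * dt
        x += v * dt
        offsets.append(-x)
    return offsets
```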
Fixed patterns would also make an interesting Halloween costume.
For a display, have an agreed-upon, and constantly changing, pseudo-random pixel/row/column update order. This would propagate up to the actual row/column selectors, so the signal couldn't be observed except at the actual outputs of the display's row/column selectors; an eavesdropper would have to monitor each individual line to grab the video data. That would mean monitoring and post-processing a huge number of signals to get a raw image.
Maybe this is how it’s done.
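One way both ends could agree on a constantly changing order without ever transmitting it: derive each frame's pixel permutation from a shared secret plus the frame number. A toy sketch, with Python's random module standing in for whatever keyed generator real hardware would use:

```python
import random

def frame_update_order(num_pixels, shared_secret, frame_number):
    # both the source and the display derive the same permutation,
    # so the order itself never crosses the wire
    order = list(range(num_pixels))
    random.Random(shared_secret * 100003 + frame_number).shuffle(order)
    return order
```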
ideally, Full google earth integration
realtime weather display with forecasts on touch
“pinch” zoom, with display centered at touch point
tap drag -> rotate with friction
tap tap drag -> continuous rotate
begin rotate after timeout
realtime/selected night/day display
rotating information banner
rotates, flick support
rotating notification banner
alerts (news, voicemail, email triggers, etc)
bluetooth phone integration
caller id display
from iphone/android contacts sync
contact selector and dialer
probably FTIR (frustrated total internal reflection), meaning a transparent dome with the projection screen separate (how do you get it in there, heat form?)
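The "rotate with friction" gesture above is basically just decaying angular velocity after the flick; a minimal sketch, with the friction constant picked arbitrarily:

```python
def flick_rotate(v0, friction, dt, steps):
    # after a flick the globe keeps spinning, losing a fixed fraction
    # of its angular velocity each frame until it coasts to a stop
    angle = 0.0
    v = v0
    for _ in range(steps):
        angle += v * dt
        v *= friction
    return angle, v
```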
This is probably "unpythonic" and whatever, but I do a lot of automation, and being able to access properties by some index (like a port number) is pretty important. This is especially true since getting a property isn't just reading a variable; it's actually reading the value from some piece of equipment.
And, in case pastebin disappears (the indentation and the enclosing definitions didn't survive the paste, so the wrapper class below is a best guess at the original structure):

# This was created based on a response from Alex Martelli at:
# There's probably a way to do this with an object (without the .property)
# Allows using a property with a parameter, such as
#   device.portSpeed[n] = 1000
#   print(device.portSpeed[n])
class Device(object):
    def __init__(self, getparam, setparam):
        # Initialize by specifying the getter and setter functions.
        self.getparam = getparam
        self.setparam = setparam

    @property
    def portSpeed(self):
        # create a class with get and set
        class PropertyArrayAccessor: pass  # class to return
        def getter(__, parameter):  # create getter function
            # return the value from the function specified during init
            return self.getparam(parameter)
        def setter(__, parameter, value):  # create setter function
            # call the function specified during init
            self.setparam(parameter, value)
        # the property will use these get and set functions
        PropertyArrayAccessor.__getitem__ = getter
        PropertyArrayAccessor.__setitem__ = setter
        return PropertyArrayAccessor()  # return an instance of the class
Using a metal brush, a low-pass filter, and a Schmitt trigger as a digital clock source for the calculation circuit, you could make a horribly inefficient electronic watch that used the position of the sun (via a low-res CCD or even photodiodes) and a compass to calculate the time. Can't use a crystal or RC, because that would be cheating! Might as well just have it keep the time!
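The "calculation" part is roughly: get the sun's true azimuth from its device-relative bearing plus the compass heading, then map azimuth to hours. A toy sketch that only holds near an equinox at low latitude; a real version needs latitude, solar declination, and the equation of time:

```python
def solar_time_hours(sun_bearing_deg, compass_heading_deg):
    # true azimuth of the sun = bearing relative to the device + heading
    azimuth = (sun_bearing_deg + compass_heading_deg) % 360.0
    # the sun sweeps ~15 degrees per hour; due south (180) is noon here
    return azimuth / 15.0
```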
Relating to the bandwidth segregated data transmission…
I picture chips:
- |=======| - photoelectric layer
- |=======| - charge storage
- |=======| - adjacent storage distribution
- |=======| - logic
- |=======| - diode and transistor junctions (photo).
- photoelectric layer to supply power, probably wouldn’t have to be too efficient
- store and filter the power
- some sort of low voltage drop rectifier to allow adjacent, power hungry, cells to continue being power hungry
- logic - maybe a mix of fixed functional blocks and programmable logic
- diode and transistor junctions to act as silicon pn junction leds and photodiodes
When the wafer cracks, the diode and transistor junctions would act as data paths across the cracks. Since power is supplied locally, each cracked section would only need light, plus some surviving logic blocks to handle communication for that section (selecting the best diode/transistor junctions, handling programming, handling data-network addressing of the cracked piece, dead-space identification, etc.).
Maybe the bandwidth-segregated communication could help quickly program the devices in the different tiers of bandwidth capability. Some bandwidth-related identification could be used as well: send some pseudo-random signal, take incremental time averages, and apply a threshold to end up with a code that would help identify the section's parameters (bandwidth capabilities, addressing blocks, task assignment, etc.).
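A sketch of that identification idea: model a section's bandwidth as a one-pole low-pass filter, feed everything the same pseudo-random signal, and threshold windowed averages of what each section actually sees. The filter constant and window size here are arbitrary:

```python
import random

def bandwidth_code(seed, alpha, window, nbits):
    # alpha in (0, 1] models the device's bandwidth: y += alpha * (x - y)
    # is a one-pole low-pass, so slower devices smear the input more
    rng = random.Random(seed)
    y = 0.0
    code = []
    for _ in range(nbits):
        acc = 0.0
        for _ in range(window):
            x = 1.0 if rng.random() < 0.5 else -1.0
            y += alpha * (x - y)
            acc += y  # incremental time average over the window
        code.append(1 if acc > 0 else 0)
    return code
```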
For the programmable logic, I suppose it would be something like an FPGA, with functional blocks that acted as programmable logic, or more analog like a neural network.
I picture window sized pieces, with all of their flaws and cracks, computing away, covering the face of every building.
In the dream that I saw these buildings in, they were powering some AI that was governing the world…but I suppose society would probably use it for some futuristic version of Angry Birds. :)
Is it possible to scramble data so that the signal can be broken into channels based on an average over time? Usually in communication, the average is kept at zero using a known and synchronized pseudo-random sequence (a scrambler). Could the sequence be modified (by observation and modification, or by a sub-channel) to allow distribution of data based on bandwidth capability? This would allow a bus with fast and bandwidth-limited devices to receive, and automatically segregate, data based on bandwidth, on a shared line. This could also work for chained, bandwidth-limiting, series connections, like analog repeaters.
Maybe for transmission, slower devices could disrupt the signal for faster devices, causing the faster devices to modify the sequence to correct.
The goal wouldn’t be signal speed, but signal distribution based on bandwidth.
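Here's one toy way to share a line like that: fill each window with ±1 chips carrying the fast bits, then spend the last (analog) sample of the window forcing the window's sum to the sign of one slow bit. A band-limited receiver that can only see the window average recovers the slow channel; a fast receiver reads the chips directly. All of the framing here is invented for the sketch:

```python
def encode(fast_bits, slow_bits, window):
    # window-1 chips carry fast bits; the last sample of each window is an
    # adjustment that forces the window sum's sign to match one slow bit
    chips = []
    fi = iter(fast_bits)
    for sb in slow_bits:
        block = [1.0 if next(fi) else -1.0 for _ in range(window - 1)]
        target = 1.0 if sb else -1.0
        block.append(target * window - sum(block))  # analog adjustment sample
        chips += block
    return chips

def decode_fast(chips, window):
    # a full-bandwidth receiver reads the chips and skips the adjustment
    return [1 if c > 0 else 0
            for i in range(0, len(chips), window)
            for c in chips[i:i + window - 1]]

def decode_slow(chips, window):
    # a band-limited receiver only sees the average over each window
    return [1 if sum(chips[i:i + window]) > 0 else 0
            for i in range(0, len(chips), window)]
```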
Fancy mirror turn signals: an LED strip activated by a wireless transmitter attached to, and powered by, the actual turn-signal light. The LED strip itself would be powered from the 12 V accessory circuit.
A low-power, inductively powered, cheap microcontroller embedded in a tire could be used to detect and warn of tire wear. The microcontroller could measure the change in resistance of thin wires molded into the tire's surface. When these filaments break from exposure to the road surface, a tire-wear indication could be sent to the driver. When deeper filaments break, a warning indicator could be displayed.
The filaments could be arranged many ways:
Resistance measurement of bundles,
individual strands at multiple depths,
bundles at multiple depths.
In all cases, resistance or open/closed type measurements could be made. As filaments broke in the bundles, the resistance would increase. The depth, spread, and number of the bundles or individual filaments could be chosen to give smoother or coarser steps in depth detection.
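The bundle version is just parallel resistors, so the measurement math is straightforward; a sketch with made-up values:

```python
def bundle_resistance(r_filament, intact):
    # identical filaments in parallel: R = r / n, so each break raises R
    if intact == 0:
        return float("inf")  # bundle fully worn through: open circuit
    return r_filament / intact

def filaments_remaining(r_filament, r_measured):
    # invert the resistance measurement to count surviving filaments
    return int(round(r_filament / r_measured))
```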
I assume this was patented a looong time ago.
Pretty obvious idea. Using the aerial and street view of Google Maps, and some simple texture/feature recognition and “design rules”, maps for driving video games could be made. Actual road data would be pretty reliable. Stop lights and signs could either be found using street view at each intersection, or randomly assigned since players will fly through them anyways. Buildings, trees, and general terrain could be extracted from the satellite views. Accuracy isn’t all that important, but some fairly realistic level structures could be made.
For a cheap, high-precision position sensor, you could use a linear CCD from a scanner, project a laser through a diffraction grating (or pinhole) with a known diffraction pattern, then do some simple curve fitting/energy minimization on the visible pattern to find the position. Curve fitting, rather than point tracking, would be used to get sub-pixel resolution. Since the light comes directly from the laser (bright), short "shutter" times could be used. By knowing a 2D diffraction pattern, x and y position could be calculated (and rotation, if the pattern isn't polar).
The higher precision would come from the averaging of the noise across the sensor width. Using a wire shadow, slit, or “knife edge”, you can only use the intensity information from the pixels at the edge(s) of the feature casting the shadow. Although, with the diffraction pattern, there would be a slight reduction of usable area caused by any fringes out of the dynamic range of the sensor. This could be helped by using a pattern with a limited intensity range (translate sensor so main lobe isn’t visible).
Something like this might provide an extended range compared to a capacitive or mirrored method, and might be more robust! The pin hole/grating movement could be pretty extreme compared to the active sensing area and not cause damage, like a capacitive sensor.
Expense would be in the microcontroller. DSP type features might be needed to process complicated patterns at reasonable speeds.
Of course, using a more standard system with a time average would most likely be just as good, but much more boring.
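For the curve-fitting step, the simplest sub-pixel trick is a three-point parabola fit through the log-intensities around the brightest pixel (exact if the spot is Gaussian). A real diffraction pattern would need a proper model fit; the spot below is synthetic:

```python
import math

def subpixel_peak(pixels):
    # brightest interior pixel, refined by fitting a parabola through the
    # log-intensity of it and its two neighbours; assumes the peak is not
    # at the sensor edge and all samples are positive
    i = max(range(1, len(pixels) - 1), key=lambda k: pixels[k])
    l, c, r = (math.log(pixels[i - 1]),
               math.log(pixels[i]),
               math.log(pixels[i + 1]))
    return i + (l - r) / (2.0 * (l + r - 2.0 * c))

# synthetic Gaussian spot centred between pixels, at 10.3
spot = [math.exp(-((k - 10.3) ** 2) / (2.0 * 2.0 ** 2)) for k in range(21)]
```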