Saturday, July 2, 2016

OpenDataSTL, Code Until Dawn, and Papers We Love

The Meetup groups in the title are what I would consider my current favorite Saint Louis tech meetups. Since arriving in St. Louis and going to these, I've started keeping a GitHub repository with my own projects. At Code Until Dawn recently, I did the final work on my neural net code. I got to talk to a lot of programmers (20-30ish) and saw what each of them was working on. One of them was doing tutorials and website learning projects on FreeCodeCamp (this was on 29 April 2016), so I decided to try it out myself. FreeCodeCamp (www.freecodecamp.com) walks you through learning to code in several key areas in a systematic way. There are a few areas I can strengthen (front-end development) and new areas I can learn about to increase my JavaScript knowledge (Node.js). It also makes it easy to link up with nonprofits and do projects for them to build a portfolio.

Since the above, I've gotten a little more familiar with Node.js and installing projects for it using npm. I got to the first project module, which is for making your first website. I am working on two other areas, Three.js and Leaflet web mapping, which will lend themselves to that project when I'm at a good milestone point. I've gone through the whole CSS bit (there are 76 small lessons in the HTML/CSS category), but feel like I need to go back and take notes this time.

Other notes from the 29 April Code Until Dawn meetup at TechArtista: a super cheap GPS chip for small projects, the NavSpark mini ($6), and a $25 software defined radio device called the NooElec. While helping BK with his RPi speed issue, I also found https://github.com/jetpacapp/DeepBeliefSDK/tree/master (which has the source code for running a neural net on the RPi GPU). The RPi now has an open sourced SDK for deep learning on the GPU (Broadcom open sourced the design notes for the GPU).
Was reading some blogs and found that the RPi also comes with an FFT library that uses the GPU (after doing an update): http://www.raspberrypi.org/accelerating-fourier.../ This would be perfect for software defined radio project work. Also, this talks about machine learning on the Pi: https://scientistnobee.wordpress.com/2014/06/20/machine-learning-with-raspberry-pi/

------

8 February, 2016: Relevant to open government / OpenDataSTL: I was browsing Kaggle.com, a website focused on teaching people data analytics and deep learning (and also trying to be a hub for the emerging career field of data scientist), and I saw that the US Department of Education has a competition there. For that competition, they are using a dataset pulling from student aid records and tax records. That means there's an open resource for seeing what the cost/profit tradeoff will be for a specific school and degree: https://collegescorecard.ed.gov/data/

------

20 Sep, 2015: Went to an OpenDataSTL / Code for America meetup tonight. There were five mapping / GIS companies represented. Most of the attendees worked at the mapping companies or were web developers. One guy was from the history museum and wanted help taking pictures of the 1904 World's Fair that was held here in Saint Louis and mapping it in 3D. Apparently, the mapping companies have processes to do that. I might help them out with that project.

------

4 Sep, 2015: Another technology that the intelligence community has attempted to keep out of the hands of the citizens it is supposed to be protecting: went to a 2600 meeting at the Arch Reactor for a PGP key signing party. Exchanged signatures with 7 people, and spent the rest of the time listening to them talk. Essentially, what a key signing party does is let groups of people make an open source, verified web of trust. The larger these get, the more crypto communications happen, which makes widespread warrantless surveillance much harder.
There is a web of key servers that peer, exchanging keys and key relationships. A key has a hash, a name, and an email address, all of which are searchable via the key servers' web pages. From many different clients or the linux command line, you can perform the functions to interact with any of the key servers. Commands follow.
To make your own key from the command line: "gpg --gen-key" Follow the instructions, choose the defaults for your first time, select a pass phrase. You'll likely have to hit keys and move the mouse around to help it generate some random numbers.
At the end, it should list your key information. To show it again: "gpg --list-keys". Mine shows "pub 4096R/8F6B884E 2015-09-04" as the first line. "pub" just means that's the public key, the one you'll want to share / put on the key servers. 4096 is the key length, R refers to the key generation algorithm (RSA), and 8F6B884E is the short key ID (the "hash"). You need it for the next command.
This command sends the key to a public key server: "gpg --send-keys --keyserver sks-keyservers.net 8F6B884E" (note that there is a large community of key servers which are syncing their key directories with each other every hour or so).
After exchanging key hashes with everybody (each person checked my photo ID to verify who I was before signing), I got back home and am now signing the keys of everyone who signed mine. Here are those commands:
I wrote down enough information on each person to find their keys on the key servers via the sites' search - either the hash or an email address. Next, I use this command to retrieve the public key for that person: "gpg --recv-keys --keyserver pgp.archreactor.org 42B7C552"
Note that I used a different key server this time - not important, since all the key server peers exchange keys with each other every hour. Then sign the key you just retrieved (gpg shows its details and asks you to confirm): "gpg --sign-key 42B7C552"
Once that's done, send the key back in so that other people can see that you've signed their key: "gpg --keyserver pgp.archreactor.org --send-key 42B7C552"
That's it as far as creating a new PGP key, putting it out on the public key servers / address lists, and making a web of trust by signing people's keys / having them sign yours. The only other thing to do is to encrypt and decrypt messages using each other's keys.
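The encrypt/decrypt round trip is the one piece without commands above. Here's a sketch using a throwaway key in a temporary keyring (so it won't touch your real one); normally the recipient would be a key you pulled down from the key servers, like 42B7C552 above:

```shell
# Use a scratch keyring and generate a throwaway key non-interactively
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never

# Encrypt a file to a recipient whose public key is in your keyring
# (their key ID or email address both work):
echo "hello web of trust" > message.txt
gpg --batch --trust-model always --recipient test@example.com \
    --output message.txt.gpg --encrypt message.txt

# Decrypt a message that was encrypted to your key - gpg finds the
# matching private key automatically:
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output decrypted.txt --decrypt message.txt.gpg
cat decrypted.txt
```

The --batch / --passphrase flags are just there to make the example run unattended; interactively, gpg will prompt for your passphrase instead.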

------

I think I posted this on FB sometime in July, 2015: https://www.youtube.com/watch?v=iH7osNm5raw

-------

28 Mar, 2015: So, I've been slowly working toward this talk I'm supposed to do for Papers We Love Saint Louis on basis functions / Wavelets. I've scanned through a bunch of scholarly articles linked by Google Scholar and have downloaded a bunch of them. I've noticed that when I research a subject, I get a ton of browser tabs open, end up with a lot of disparate files, and don't have very effective ways to keep things organized. What I end up with is a lot of folders with files in them, and I end up losing track / forgetting about half of them. I think what I need is a mindmapper (type of software) that will allow you to quickly link a file locally, as well as allow links to URLs that will open in a browser. -- dad commented "go for it, you can do it"

-------

7 Mar, 2015:
Ouch, still scanning papers. Stressful. Not sure exactly on layout of the talk either, only have a vague idea at this point, and I have to keep studying because it's over my level.
http://www.meetup.com/Papers-We-Love-in-saint-louis/events/220109658/ (update, Jul 3, 2016: I was trying to get ready for an advanced talk for Papers We Love on Apr 20th - it was over my head and I was struggling - I wasn't ready by that first date, but gave the talk in June, if I remember the date correctly - it went very well that time - I ended up talking about wavelet functions).

I've been reading about basis functions, and one of the things I've noticed is how they are kind of thrown over a data set, falling like a web around the data points (well, getting adjusted around it I guess, need to keep reading).

Comments:
(me): One of the cool parts about the project I've been working on at my job lately is that it's forced me to learn CSS, so now I'm actually proficient at making somewhat nicer-looking web pages. I'll probably put a page back up, maybe link it with an FTP server, and share a folder with groups of papers, along with links to another GitHub directory (I'll probably put one up myself - the Papers We Love GitHub directory is too sparse).
(me): Working on configuring the FTP server (the vsftpd process) - this page is useful for finding things on Linux: http://www.cyberciti.biz/faq/howto-find-a-directory-linux-command/
(me): I've been having one problem at work with my CSS configuration: trying to show and hide divs from JavaScript (using several different methods) keeps completely failing. Not sure what the issue is, but after spending 3-4 hours on it, I talked to a coworker who mentioned using CTRL-F5 to refresh the browser cache... I also saw some forum comments from people with the same issue suggesting the CSS config might be the problem. Not sure at this point.
(me): http://www.staroceans.org/documents/Wavelets%20for%20computer%20graphics%20%20A%20primer.pdf
(me): I found this very relevant MIT lecture set that I've been watching, and I'm going to use the above paper as a placeholder. I'm not going to constrain myself to the paper by any means, but intend to give an overview of how basis functions are used, as well as the process of throwing them against an image / going back and forth between the input image and the output matrix. I'm going to try to have working code and bring a webcam. The video of me giving the talk will go up on YouTube.
http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-28-similar-matrices-and-jordan-form/
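A toy version of the "throw a basis against the data" idea: project a signal onto an orthonormal cosine basis (a hand-built DCT-II) and come back out again. This is my own numpy sketch, not code from the paper or the lectures:

```python
import numpy as np

# Build an orthonormal cosine basis (DCT-II) by hand: each row is one
# basis function sampled at N points.
N = 8
n = np.arange(N)
basis = np.array([np.cos(np.pi * (n + 0.5) * k / N) for k in range(N)])
basis[0] *= 1.0 / np.sqrt(2.0)
basis *= np.sqrt(2.0 / N)            # rows are now orthonormal

signal = np.sin(2 * np.pi * n / N) + 0.5   # arbitrary input "data"
coeffs = basis @ signal               # input -> coefficient matrix direction
reconstructed = basis.T @ coeffs      # and back again

print(np.allclose(signal, reconstructed))   # True: the basis loses nothing
```

The "falling like a web around the data points" picture is exactly the `basis @ signal` step: each coefficient measures how much of one basis function is present in the data.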


---------

19 Feb, 2015:
Update:
I've found out that I can get access to a very large number of research papers for free via Google Scholar. If you type in a search term, any result without a PDF link to the right is behind a paywall. If you bought papers at the rate these middlemen sell them, you would run out of money before you had enough material to do anything with. However, about three results per page have a PDF download link to the right. I've found that multiple sites often host the same papers, with some available for free download and the majority charging a high price for access. The academics themselves are not getting paid by the paywalls for their papers, but they upload to many locations - a great many of which turn around and charge outrageous fees for work they get for free.

Machine Learning / Computer Vision

2 July, 2016: So I've spent a bit over 1K hours reading about and watching video lectures on neural nets. I was able to code the core algorithm for feed forward / back propagation in javascript. Here's the code I wrote: https://github.com/DiginessForever/randomCode/tree/master/machineLearning

I still need to gather more data to train it - haven't been able to use it yet because I don't have it hooked up with a dataset yet. Some possibilities would be making it learn to walk a robot (staying upright, being given a desired direction and status of whether it is standing/walking or has fallen down), or I could do the "hello world" project and have it recognize characters (letters and numbers).
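My actual code (linked above) is JavaScript; here's the same core feedforward/backprop loop sketched in numpy, using XOR as stand-in training data since I haven't hooked up a real dataset yet. One hidden layer, sigmoid activations:

```python
import numpy as np

# Minimal feedforward / backprop sketch: 2 inputs -> 4 hidden -> 1 output,
# trained on XOR with plain gradient descent.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # feedforward
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backprop: error times sigmoid derivative, pushed back layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out.ravel(), 2))   # ideally close to [0, 1, 1, 0]
```

Bias terms are left out to keep it short; the real thing (and the C implementation linked below) includes them.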

Other updates: Elon Musk and company have come out with OpenAI Gym.

I've been following along with Adrian Rosebrock's PyImageSearch blog (bought his book and his Ubuntu virtual machine image as well).

I also found this recently:
http://www.computervisionblog.com/2015/01/from-feature-descriptors-to-deep.html


To watch later:
https://www.youtube.com/watch?v=3lp9eN5JE2A "Evolving AI Lab: Deep Learning Overview & Visualizing What Deep Neural Networks Learn"
https://www.youtube.com/watch?v=szHRv4MwCBY "Microsoft Research: Recent Advances in Deep Learning at Microsoft: A Selected Overview"
Soft Robotics, a youtube video channel: https://www.youtube.com/playlist?list=PL9Bl8hjGqGOUaRjANlTqGmPWtYmM99lfM
---------------------
Post from May 6th, 2016:
Next step for my neural net project - learn to use the Raspberry Pi deep belief SDK (gives access to the on board GPU's 20 GFLOPs - a lot faster than it would be if you only used the CPU, which also has to run the operating system). Yay - I don't have to hack a solution using assembler on the GPU anymore (that was the worst headache - I spent quite a good bit of time reading the open sourced documentation for the GPU, and it wasn't going to be pretty, then would only be good for one device). I am just about done with the primary code for my javascript neural net. I definitely recommend coding a solution out if you do not understand a subject completely - it forces you to wrap your mind around it, and if you think you understand it but do not, you'll find out very quickly when it doesn't work.

Last weekend, Elon Musk and a bunch of other researchers and companies interested in AI released a tool, OpenAI Gym. It's what I thought I was going to have to code myself - environments that give visual feedback / let your neural net control a model/actor and solve a problem. The cool thing about this tool is that (at least according to the documentation) it is framework agnostic, meaning you can use multiple deep learning libraries to train the neural nets, then give the trained net to the tool, where it works to solve the problem.

One cool thing I've found out - the feedforward/backprop neural net is the foundation for all the deep learning research that's been going on. I am still at a pretty severe disadvantage when compared to real researchers though - they have top of the line GPUs so can run much larger networks much faster.

However, having programmed a neural net, I can at least now understand a lot of what they're talking about. For instance, I found a new set of researchers to read up on - there's a Swiss AI lab that's won 5 past international machine learning competitions. There are two things I want to learn about that they are doing.
1. Long Short-Term Memory with recurrent neural nets. They link the layers a bit differently to make a sort of logic gate which remembers or forgets based on relevance.
2. Hierarchical neural nets. I didn't see any links to this as I scanned through the research links on their page, but I did see references to it in articles. Somehow, they stack trained neural nets to have a more comprehensive net that understands more.
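Point 1's gate structure, as I understand the standard LSTM equations (my own numpy sketch, not the Swiss lab's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    The forget gate f decides what to drop from the cell state, the input
    gate i decides what new information to write, and the output gate o
    decides what to expose - the "remembers or forgets based on relevance"
    behavior. W, U, b hold all four gates' parameters stacked together.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b               # shape (4n,)
    f = sigmoid(z[0:n])                      # forget gate
    i = sigmoid(z[n:2*n])                    # input gate
    o = sigmoid(z[2*n:3*n])                  # output gate
    g = np.tanh(z[3*n:4*n])                  # candidate cell update
    c = f * c_prev + i * g                   # new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c

# Tiny smoke test: 3 inputs, 2 hidden units, random parameters.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 2
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
print(h.shape, c.shape)
```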

Here is Yann LeCun's MNIST data set: http://yann.lecun.com/exdb/mnist/

And here's another C implementation - this one also uses for loops instead of matrices: http://www.cs.bham.ac.uk/~jxb/NN/nn.html I'm working on the last piece of the backprop right now. The final questions I have are on step size and learning rate. Apparently, it's best if you adjust them smaller as the net starts to settle on a solution.
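On that last question, the usual trick is a decay schedule: shrink the learning rate as the net starts to settle. A toy sketch on f(x) = x^2 (the decay constants here are arbitrary, just to show the shape of the idea):

```python
# Gradient descent on f(x) = x^2 with a decaying learning rate: big steps
# early while far from the minimum, smaller steps as things settle.
x = 5.0
lr0 = 0.5
for t in range(1, 101):
    lr = lr0 / (1.0 + 0.05 * t)   # simple 1/t-style decay schedule
    grad = 2.0 * x                # derivative of x^2
    x -= lr * grad
print(abs(x))   # close to 0, the minimizer
```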



----------
Post from 1 May, 2016:
https://gym.openai.com/docs

Interesting. Facebook lets me post some links, but not others. For instance, on Friday I gave up trying to post about that morning's announcement by Movidius of the new USB-stick deep learning accelerator they had developed. The FB form wouldn't accept the link when I copied and pasted it.

In any case, OpenAI is Elon Musk's company, supposedly open sourcing AI. They have indeed made tools available, but they are also still open to patenting anything they get their hands on. So beware giving them too much or getting too invested in that toolset.

-----------
26 April, 2016:
Machine learning on the RPi: https://scientistnobee.wordpress.com/2014/06/20/machine-learning-with-raspberry-pi/
Found a link which explains a few questions I had on backprop: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
Combine that with this straightforward / simple C implementation of what the guy is talking about in that blog post (it'll answer any remaining questions): http://www.cs.bham.ac.uk/~jxb/NN/nn.html
-----------
23 April, 2016:
This is a blog post from Adobe back in 2011. I saw a link to it on the comments for a CSI related article. Basically, if you take a lot of pictures of a static scene, you can get more resolution. If anything is moving, it won't work for that thing. I'll have to dig further to see what algorithms are used.
https://blogs.adobe.com/photoshop/2011/10/behind-all-the-buzz-deblur-sneak-peek.html
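I haven't dug into Adobe's actual algorithm, but the simplest form of the static-scene trick is plain frame averaging: the scene is the same in every frame while sensor noise is random, so stacking N frames cuts the noise by roughly sqrt(N). A toy numpy version:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (64, 64))            # the static "true" scene

def noisy_frame():
    # each capture is the scene plus independent sensor noise
    return scene + rng.normal(0, 0.2, scene.shape)

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(100)], axis=0)

err_single = np.abs(single - scene).mean()
err_stacked = np.abs(stacked - scene).mean()
print(err_single / err_stacked)   # roughly sqrt(100) = 10
```

Real super-resolution also exploits sub-pixel shifts between frames to recover detail, not just noise reduction - and as the article says, anything that moves breaks the assumption.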
----------
20 April, 2016:
This is kind of neat. Leaves out integrals, which is a whole different beast:
https://www.quora.com/What-is-the-correct-algorithm-to-perform-differentiation-using-a-computer-program-for-any-function-entered-by-the-user/answer/Prashant-Sharma-12?srid=uOUuv&share=5c81d9a3
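For comparison with whatever the answer describes (it may well be symbolic), the quick-and-dirty numeric route is a central difference:

```python
# Central difference: approximates f'(x) with error on the order of h^2,
# better than the one-sided (f(x+h) - f(x)) / h version.
def derivative(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(derivative(lambda x: x**3, 2.0))   # exact answer is 12
```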
---------
16 April, 2016:
Wow, I've never been so close to completing this neural net that's been stuck in my craw for so long. https://github.com/DiginessForever/randomCode

Still working on backprop, but luckily there's this really smart high schooler on YouTube who actually uses legible notation and has coded a passable implementation (most professors don't seem to code, so I end up seeing a lot of really hard-to-understand C code written by random people that doesn't go with whatever lecture I'm looking at): https://github.com/SullyCh…/FFANN/blob/master/cpps/FFANN.cpp

That, combined with a ton of other Youtube tutorials, makes it look like I'll push my brain over the ledge.

https://www.youtube.com/watch?v=gl3lfL-g5mA
--------------
16 April 2016:
Just got back from the physics department at WashU in Saint Louis. They have talks every Saturday in October and April, sometimes into May. Today's talk was about vision arising from neurons. I am pretty sure Rupert Murdoch was sitting on the other side.
-- The professor didn't talk about computer science neural net models, only the experiments and models they've been doing since 2013, when the BRAIN Initiative started. Apparently, they cut the brains and eyes out of turtles and have the eye watch movies while they record the signals running down the nerve from the eye to the brain.

-- I did learn one new thing. There are weights both in the neurons themselves and in the axons - the area between the connections of one neuron to another. There's also compression between the retina and the signal going down the nerve from eye to brain: only about a million bits of information move down that nerve in pulses, while far more is taken in by the three cone types (wavelengths) and the rods (low-light amplitude).

-- So it definitely doesn't help me with backprop, and it feels like that community is around 60-70 years behind on models. I'm sure they'll catch up fast.
------------
12 April, 2016:
I checked out 123D Catch the other day. It's pretty smooth. You take about 15 pictures around an object and it turns them into a single 3d model of the object. It uses the phone gyro and compass to give you a map of which areas you've already taken and which you still need. I did one of my boots and got a very high quality fully textured model with no extraneous floating points. It's kind of like VisualSFM, only closed source black box solution. The models turn out better and the app is free, but if you wanted to use the models for a product, it's $10/month.

I really would like to have a completely non-cloud-based software solution that's pipelined so that I can move my phone around an object, have it grab a bunch of pictures, then crunch those later to automatically create a 3d model (perhaps when cell phones are even faster, it will be possible to do the data crunching/picture comparisons on-the-fly).
------------
3 April 2016:
https://www.youtube.com/watch?v=xsEdu6Xq6KU

Been doing more reading. Getting a little further, slowly. This answer on Quora gives a good entry point to each part of the process:
https://www.quora.com/What-is-feature-detector-descriptor-descriptor-extractor-and-matcher-in-computer-vision?share=1

Found this site as well: http://www.vlfeat.org/api/index.html


Another resource - OpenCV page: http://docs.opencv.org/…/feature_d…/feature_description.html

This subject is actually rather huge - I imagine that one day we'll have inverse graphics cards - since computer vision is kind of like doing everything a graphics card currently does, but backwards (with a lot more processing involved).

-- This guy has some great videos - this one explains feature descriptors - I'm not done watching it yet: https://www.youtube.com/watch?v=oFVexhcltzE . I got sidetracked onto Khan Academy to learn about Laplace transforms. Apparently, the math involved in making a feature descriptor scale invariant (able to detect the same object regardless of how big or small / near or far away it is) depends on Laplace.

-- The Harris Corner detector is rotation invariant, and then Laplace is somehow used to make it scale invariant (found this too: https://www.youtube.com/watch?v=NPcMS49V5hg ). The two are combined to fully match two features between images from different cameras of the same scene. Once features are matched, you can do trig to find the distance (z-coord), giving you a point cloud.
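Here's a from-scratch sketch of the Harris response itself (numpy only - my reading of the standard formula, not OpenCV's implementation):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response map. Corners are points where the local
    structure tensor has two large eigenvalues (gradients in both
    directions); edges have only one, flat areas have none."""
    # image gradients via finite differences
    Iy, Ix = np.gradient(img.astype(float))

    # structure tensor entries, smoothed with a crude 3x3 box filter
    def box(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2   # large positive -> corner

# A white square on black: the response should peak near its corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
print(peak)
```

The rotation invariance comes from using eigenvalues of the structure tensor (they don't care which way the corner is turned); the Laplacian-of-Gaussian scale-space step from the videos is what you'd bolt on top for scale invariance.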

-- Apparently, if you use SIFT, you have to pay royalties (University of British Columbia owns it). The professor did an awesome job. Now, however, there is another algorithm which outperforms it and has an open license: Kaze. I'm going to have to compare the two algorithms to see if Kaze does everything I need.

---------
29 March, 2016:
Saw something interesting recently. Apparently, my difficulty in progressing from edge detectors to marking more general features in images had to do with terminology in the field.
A feature detector is an algorithm that finds where the interesting points are - things that stand out, like edges or corners.
A feature extractor (descriptor) goes more in depth: it computes a vector describing the neighborhood around each detected point, so points can be matched between images.
---------
21 March, 2016:
Thoughts on using shaders, OpenCL, or CUDA: a typical 2.5GHz single-core processor does about 10 GFLOPS (billions of floating point operations per second), and the graphics card sitting next to it usually does more. For instance, the Raspberry Pi has a low-powered, low-end GPU, and even that does about 25 GFLOPS. If you can use both the processor and the GPU at the same time, you can obviously add the two together. That would mean even a really low-end computer like the RPi could be blazingly fast.
---------
(political, non-technical): 5 Mar, 2016:
The comments on this ArsTechnica article are great: http://arstechnica.com/information-technology/2016/03/dod-officials-say-autonomous-killing-machines-deserve-a-look/?comments=1&start=40

There's a comment with a link to a story about a war between two chimpanzee communities studied by Jane Goodall (basically saying that violence is part of our nature): https://en.wikipedia.org/wiki/Gombe_Chimpanzee_War

One thing that most strikes me is this idea of a "moral compass" in software. I think what people don't realize is that in order to do that, because computers are just math devices, any technique of giving a program/algorithm morals will actually just be assigning a set value to human life using some equation.

The reasons why I have not volunteered to help these DoD officials are:
1. The open letter regarding autonomous killing machines.
2. The issues of trust regarding all the lies around 9/11 (foundation for "The War on Terror") and the kind of decisions they've been making with their current program (the absolutely abysmal rate of innocents/not-innocents killed + deliberate killing of underaged US citizen).
3. Their complete lack of any desire to use said technology for good (I was denied on submittal of idea concerning research project for using machine learning for USDA/DoD agricultural partnership).

-------- 25 Feb, 2016:
Saw a post on FB on the Backyard Brains page about a book:
https://www.amazon.com/Neuroscience-Dummies-Frank-Amthor/dp/1118086864/181-6136217-7332508?ie=UTF8&SubscriptionId=AKIAILSHYYTFIVPWUY6Q&camp=2025&creative=165953&creativeASIN=1118086864

Some other material I picked up off the Backyard Brain's FB page: 1. http://www.brainfacts.org/about-neuroscience/brain-facts-book/
2. https://backyardbrains.com/experiments/EOG
3. From Kevin Crosby's comment: "Here's a history of brain implant technology I compiled. An early draft was credited in 2005 as the basis for the Wikipedia article on the subject, and back in 1997 the Department of Defense ordered me to stand down when I tried to discuss it". http://skewsme.com/tinfoilhat/chapter/brain-implants/


---------

17 Feb, 2016:

I don't remember how I came across this, but combined with software defined radio and a little hacking, I might be able to do something like this myself... I still have that Parallella with the onboard FPGA and 2 ARM CPUs. The biggest challenge is dealing with the extremely high frequency (2.4GHz is 2.4 billion cycles per second). I really liked what he said about having arrays of transmitters (phased arrays) to very quickly aim the radar in an arbitrary direction. I'd much rather have that than an antenna cone.
https://www.youtube.com/watch?v=ztR9mdJ1YWU

One of the reasons that Google has been so successful with their self-driving cars is that they don't rely only on computer vision; they also have onboard LIDAR. It'd be really neat to get a point cloud from a very cheap set of hardware and overlay images from a cheap webcam on top of it for further object classification.

---------

Using machine learning to evolve muscles / bone (making the foundations for a robotic walker):
https://www.youtube.com/watch?v=z9ptOeByLA4&feature=youtu.be
keywords: soft robotics, morphology, paper: "Flexible Muscle-Based Locomotion for Bipedal Creatures"

--------

1 Feb, 2016:

I was doing a search on Google and Bing for "sort point cloud rotation invariant", and I found this site: http://www.openperception.org/ - the point cloud API is here: http://www.pointclouds.org/ Still learning about computer vision, but slowly converging on an overall understanding of the process. The more I understand, the more APIs and communities I find, which is cool.

--------

29 Jan, 2016: Microsoft just open sourced their neural net toolkit. Seems better documented / more familiar than Facebook's or Google's. https://github.com/Microsoft/CNTK/wiki/Examples

--------

12 Jan, 2016: This is a pretty good post explaining the need for robotics. In the past, I've thought that robots, on the whole, would take away needed jobs, but in fact there are many jobs we need filled that simply cannot pay enough:
https://medium.com/@gerkey/looking-forward-to-the-robot-economy-1ba4ee1647e3#.jt5mfngv0


--------

29 Dec, 2015: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

--------

24 November 2015:
Best video I've ever seen on neural net construction (Dave Miller):
https://www.youtube.com/watch?v=KkwX7FkLfug

-------

7 Oct, 2015:
All,

I need some help. I'm submitting an idea for the upcoming Airmen Powered by Innovation summit in December. However, I don't know anyone in the DoD or directly working for government who is a computer vision researcher. I know of people working for DARPA who do this, and there are a lot working for universities. I can put together a team for the presentation, provided they work for DoD / government.

Here's my Innovation Summit submission.

I don't think I want to do it alone - while I have a lot of strengths, this area definitely stretches quite a bit past my current skill level. I would say it would demand something close to my maximum potential.



-------

16 Sep, 2015: I just bought a copy of Practical Python and OpenCV! https://www.pyimagesearch.com/practical-python-opencv/ @pyimagesearch

It was $70, but worth it: in the quickstart bundle, Adrian Rosebrock created an Ubuntu image for a VirtualBox virtual machine, which makes it unnecessary to spend all the time configuring an extra OS and installing Python 3 and OpenCV 3. It also has a 270-page book with code + case studies and a bunch of videos. I'm sure the image will save me more time in the future, as I always wreck my OSs.

I've installed OpenCV in the past, and it's a pain getting everything set up initially. However, python and OpenCV are probably the best way to study computer vision. I think having an image of a pre-configured OS all set up and ready to go is an excellent idea.

3 Sep, 2015:
www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA092604
This is an early project in computer vision, the report was published in 1980. The project was done at Stanford. They programmed it in BASIC. Pages 21 and 22 demonstrate some serious math fu. This project has everything I'm interested in - structure from motion, estimating a 3D world model, and pathing.

-------

13 Aug, 2015:
Not directly computer vision related, but video codecs are related to how much compression you get and how much overhead it takes to transfer the video files. https://yro.slashdot.org/story/15/08/11/2327221/cisco-developing-royalty-free-video-codec-thor

-------

22 Jul, 2015:
My filter bubble has apparently started to include a lot more to do with computer vision. I've come across references to Halide now three times in various places, related videos on Youtube, and now a mention from a recent presentation by the Khronos Group about OpenCL. Halide apparently is a computer language specifically for computer vision. I might have to check it out.

On another note, apparently the Parallella's Epiphany coprocessor will soon have an OpenCL API release.
An API (Application Program Interface) is basically just a group of functions/methods you can run from a program you make. You include/import at the top of the program, and then can run them at any point in your code. OpenCL is a standard that allows you to do computation on both CPUs and GPUs (graphics card processors).


-------

20 Jul, 2015: A sub/sub bot combo found a Napoleonic era shipwreck by accident (weren't specifically looking for that wreck): http://www.theregister.co.uk/2015/07/18/shipwreck_discovery_sonar_auv_north_carolina/

-------

19 July, 2015: Got OpenCV 3.0.0 installed on Slax finally. This is a good set of instructions; the only difference for Slax was that there's no ld.so.conf.d directory, only an ld.so.conf file, so the line "/usr/local/lib" goes at the end of that file instead:

http://webcache.googleusercontent.com/search?q=cache:2vOqPlUYfNoJ:www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/+&cd=1&hl=en&ct=clnk&gl=us

That version of OpenCV just came out last month. Apparently they have a lot of the processes automatically using both the GPU and CPU, speeding them up. Also, for accessing video streaming/video files, you don't need the separate dependencies.

Comments: So, I've gotten basic wavelets down. New goals - learn about homogeneous coordinates and basis functions in matrices, learn OpenGL, and become a lot more familiar with the SIFT and SURF computer vision algorithms. -- https://www.youtube.com/watch?v=oFVexhcltzE

-------

19 Jul, 2015:
This is awesome - automatic machine vision correction of video when you're on a videochat - makes it always look like you're looking the person on the other side in the eye.
https://www.youtube.com/watch?v=A5QlDfBpNxw

-------

19 Jul, 2015:
https://www.youtube.com/watch?v=Y9K2yeBZS9I
Found this video on Youtube - has exactly how to do a 2D Haar wavelet transform step by step.
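For my notes, one level of the 2D Haar transform is just pairwise averages and differences across the rows, then across the columns. My numpy sketch of the procedure the video steps through (using the averages/differences convention; some texts scale by 1/sqrt(2) instead of 1/2):

```python
import numpy as np

def haar_2d_level(img):
    """One level of the 2D Haar wavelet transform. After it runs, the
    top-left quadrant holds the coarse approximation (local averages)
    and the other three quadrants hold horizontal/vertical/diagonal
    detail coefficients."""
    def step(a):   # pairwise averages | differences along the last axis
        avg = (a[..., 0::2] + a[..., 1::2]) / 2.0
        diff = (a[..., 0::2] - a[..., 1::2]) / 2.0
        return np.concatenate([avg, diff], axis=-1)
    rows = step(img)          # transform each row
    return step(rows.T).T     # then each column

img = np.arange(16, dtype=float).reshape(4, 4)
out = haar_2d_level(img)
# top-left 2x2 quadrant: the averages of each 2x2 block of the input
print(out[:2, :2])
```

Repeating this on the top-left quadrant gives the multi-level transform used for image compression.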

-------

17 Jul, 2015:
Some more pictures and the transformed version.

Now to do some more color conversions, then apply a DFT.


-------

17 Jul, 2015:
Coming along - been playing with code that takes in images and then graphs them in 3 dimensions. Finally have some that works decently after making every mistake in the book.
Comment: (me) That picture is a girl with a world map painted on her face in many different colors. I've copied a piece of code (stack exchange) that maps the colors from RGB onto a jet color space. I still haven't gotten a good install for OpenCV on Slax yet, so I'm using just numpy and matplotlib.

-------

14 Jul, 2015: Software that does this is so neat. My favorite so far is Visual SFM. You take a bunch of pictures around an object / scene, put them in a folder (they need to be in JPG format), and run the software; it takes a while to construct a 3D point cloud. You then use something like Meshlab and a modelling program like Blender to get a 3D object. https://stackoverflow.com/questions/7705377/how-to-create-3d-model-from-2d-image

-------

19 Mar, 2015:
Cool - Adapteva has their 16 core computer on Amazon now for pretty cheap. http://www.amazon.com/Adapteva-Parallella-16-…/…/ref=sr_1_1… This was a Kickstarter project a short while back (months? can't remember exactly when). It's about $150 and pretty dang small. It looks about RPI size, though I can't tell exactly.

Building a Modern Computer from First Principles / Computer Architecture

Finished project 1 of the Coursera course "Build a Modern Computer from First Principles: From Nand to Tetris". Made 15 logic circuits given just NAND. Next up: designing an ALU.

Been looking at Voxel.js to go along with the Three.js javascript coding I've been doing. It's basically moving toward a Minecraft-like game in the browser. The server is done with Node.js (I'm still learning about that). The cool thing about all the above is that if I get enough modules together, I can duplicate what that guy did with the Minecraft computer (which was inspired and educated by the same class I'm taking, mentioned above), put it all out as a self-contained set of javascript modules, and then use it for my first project for FreeCodeCamp. Here are the chips I've written on my github repo so far for this course: https://github.com/DiginessForever/randomCode/tree/master/LogicGateHDL

-----------------------

From earlier: This course is happening soon: "Build a Modern Computer from First Principles: From Nand to Tetris (Project-Centered Course)", hosted on Coursera, originating from the Hebrew University of Jerusalem. I highly recommend it. An old work acquaintance of mine had a course based on the book ("The Elements of Computing Systems: Building a Modern Computer from First Principles"); the second professor listed on the course page, Noam Nisan, wrote the book. The acquaintance was pushing me on it again, which got me to make it through the boolean logic section. The book also guides you through making a compiler, teaches computer hardware architecture at the electronics level, and goes up the logic stack from assembler to higher-level languages. I learned quite a bit from it, even though I have not finished it yet or even gotten halfway through. My digital electronics class at SWIC also got into boolean logic / logic gates, but this book takes it to the next level. The course ought to be quite good.
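The core idea of project 1 - building every other gate out of NAND alone - can be sketched in plain javascript (my own illustration, not the course's HDL):

```javascript
// The single primitive everything else is built from (1-bit values, 0 or 1):
const nand = (a, b) => (a && b) ? 0 : 1;

// Each further gate is defined only in terms of gates already built.
const not = a      => nand(a, a);
const and = (a, b) => not(nand(a, b));
const or  = (a, b) => nand(not(a), not(b));
const xor = (a, b) => and(or(a, b), nand(a, b));

// Truth table for XOR - 1 exactly when the inputs differ:
for (const a of [0, 1])
  for (const b of [0, 1])
    console.log(a, b, '->', xor(a, b));
```

The course has you build 15 such chips in its HDL; the javascript version is just to show the dependency chain, each gate resting on the ones below it.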
https://www.coursera.org/learn/build-a-computer?recoOrder=3&utm_medium=email&utm_source=recommendations&utm_campaign=recommendationsEmail~recs_email_2016_05_22_17%3A57 Here is the original companion website: http://www.nand2tetris.org/, just in case anyone is interested but doesn't have the time for the Coursera course's time frame. The original book itself, I could not find on Amazon, but here it is at MIT Press: https://mitpress.mit.edu/books/elements-computing-systems Here is the other author of the book describing its purpose in a TED talk: http://www.ted.com/talks/shimon_schocken_the_self_organizing_computer_course I think in my upcoming project as part of the FreeCodeCamp map, I will create a webpage with my CV, and this will be one of the books I list as a recommendation (along with Who Is Fourier and a few others).

--------------------

28 Feb, 2016: Wow, found this video showing one of the first mechanical computers. Someone in the 1800s designed it to do addition and multiplication using brass gears. Even better, I noticed the guy introducing it in the video is Clifford Stoll, who wrote of his experience tracking one of the first computer hackers in Cuckoo's Egg. http://www.cnet.com/news/watch-1895s-millionaire-machine-do-some-astounding-mechanical-calculations/

-------------------

3 Jan, 2016: This project is cool because it shows a basic computer in an IC, which you can wire up on a protoboard. It also shows a wiring diagram for hooking up to VGA. It's kind of expensive for an IC ($25), but considering it's actually a tiny computer... Here's the company's site for the IC: http://www.espruino.com/ http://www.instructables.com/id/Make-Your-Own-Home-Computer/

------------------

25 Mar, 2015: My electronics class teacher broke down diodes and transistors last night - it was pretty neat. He talked about doping levels and showed each connection's bias direction in relation to the current at the gate (or the voltage, for field-effect transistors).
He also talked about the history of the transistor invention process and how the University of Illinois was involved. I still need to catch up on homework and then review a lot of the reading for this class.

Comments:

Gabrielle - Doping levels? Explain?

Me: OK, so a transistor is actually two diodes. A diode keeps electrical current running in only one direction in a circuit (positive to negative or negative to positive). An alternating current goes one direction, then the other. On a graph, that looks like a sine wave: when it's positive, the current is going negative to positive, and when it's negative, it's going positive to negative. That's the direction the current is moving (a bunch of little electrons - they're not really moving far themselves, they're exchanging their energy). Diodes keep current going only one way by blocking it when it tries to go the other way. So in the simplest case, they are filters. Building on that, you can arrange them so they amplify signals (filtering part of a signal while adding two signals together when they are going a certain way). You can also use them to pass current only when it's a certain strength. Combine that with a capacitor and you can make an oscillator (a clock signal - how fast it pulses depends on the capacitor). Put a bunch of diodes together and you can start to do more complicated things like logic (logic gates), where you can do basic value comparisons.

So what is a diode? It's just two pieces of doped silicon up against each other with wires leading to both ends - the current has to try to go through the junction (it'll be a P-N junction; getting to that). What is doping? Well, it's when you take pure silicon (which has an extremely high resistance) and mix other material types with it that bring its resistance down. The additive will be either a P-type or N-type material - both are ions in that their atoms have an extra electron (N-type) or are missing an electron (P-type) in the valence band - and you can actually see which atoms will have these properties just by looking at a table of elements. Mixing these materials with silicon gives you a wafer that's N-type or P-type. Put these wafers together and you have a P-N junction, i.e. a diode. Link a circuit to the junction, with one wire going to the N side and one to the P side, and try to run current through it: if the current tries to go in on the N side, the junction resists strongly, because the N side already has all the electrons it can take. However, if the current goes in on the P side, the junction allows it through - and because the P side pulls electrons toward it from the N side, you get a ripple effect where the energy flows all the way through the junction.

Me: http://www.circuitstoday.com/understanding-the-pn-junction

Me: What I wrote above is a simplified description of how the P-N junction works. As you get more into the detail, you'll see that current won't flow until you put a certain amount of voltage across the junction, and if you put too much voltage on it, you'll burn it up. Each diode type has its own characteristic curve (voltage vs. current) that tells you how much voltage you need to pass current and how much will destroy the diode. The curve differs based on whether you are trying to make the current go through the junction in the forward or reverse biased direction.
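The characteristic curve described above is usually modeled with the Shockley diode equation, I = Is * (e^(V / (n * Vt)) - 1). A quick sketch with typical textbook constants (illustrative values, not measurements of any real part):

```javascript
// Shockley diode equation: current as a function of voltage across the junction.
const Is = 1e-12;   // saturation current, amps (typical small-signal diode)
const n  = 1;       // ideality factor
const Vt = 0.02585; // thermal voltage at ~300 K, volts

const diodeCurrent = v => Is * (Math.exp(v / (n * Vt)) - 1);

// Forward bias: almost nothing flows until the "knee", then current grows
// exponentially.
console.log(diodeCurrent(0.3)); // tiny (~1e-7 A)
console.log(diodeCurrent(0.7)); // substantial (~0.5 A)

// Reverse bias: current saturates near -Is. (Breakdown at large reverse
// voltage is not captured by this simple model.)
console.log(diodeCurrent(-1));
```

Plotting that function is exactly the volts-to-current characteristic curve from the datasheet, minus the burn-it-up region.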

Basic tech instructions:

1. You can run a webserver on your local computer with no setup involved if you have Python installed - just run this command from the command line / terminal (from the folder you want the web server to serve up files from):
python -m SimpleHTTPServer 80
Where 80 is the port number. 80 is the normal web browsing port, but you can use other numbers if you want. (That's the Python 2 module name; on Python 3 the equivalent is "python3 -m http.server 80".)
(10 Jan, 2016): I don't know how many people following this use Linux, but if you have Python installed, you can start a very simple web server with one command: http://www.linuxjournal.com/content/tech-tip-really-simple-http-server-python

Then if you want to share it outside your network, it's just a matter of going into your router and setting up a port mapping (possibly also messing with your OS firewall rules).

I think I might do this - I've posted some really cool stuff on FB, but it's too hard to find a specific thing in my history. I need to keep all the project stuff in one place.

(Note: python-weboob - "Web Outside Of Browsers" - a library for interacting with websites without a full browser...dad mentioned it, so I'll check it out later)

2. (5 March, 2016) One of the really cool things about open source is that there are sites where you can share your code. Most people use Github. Along with Github, there's a program called git which allows you to easily (once you get used to it) get a copy of any repository (the name for a project/folder of code) on Github. Most people leave their repositories public because you have to pay to make them private, at least on Github - git itself supports keeping your code wherever you want.
So now I'm setting up a Github repository under my user ID DiginessForever called randomCode (for bits and pieces that are useful - will make more repositories for specialized purposes). So, the url will be https://github.com/DiginessForever/randomCode
-- One weird thing about git terminology is that a "pull request" is when you ask a repository to take your code back up after you've made changes - the admin will then use the merge tool to review your changes, and if they like them, pull them into the master branch. Git is probably one of the best source control programs to learn.
-- This is a tutorial on how to use Github's web interface to manage a repository on the site. You would still have to learn a few commands for your git client (GUI or command line/terminal - I use Linux terminal) https://guides.github.com/activities/hello-world/
-- Relevant tutorial commands (terminal commands in quotes; don't include the outer quotes when entering them):
1. Make a new file, edit it / save the changes.
2. Enter this command from the folder you want to be the repository: "git init".
3. "git add <filename>".
4. "git commit -m "<your commit message>"".
5. (only have to do this once) "git remote add origin https://www.github.com/DiginessForever/randomCode.git" (or whatever the git master repository url is).
6. Push your changes: "git push -u origin master".
Note: When I changed a file, I had to add it again in order to be able to push.
-- In closing, I'd like to point out that if you ever program anything, just take my advice - a source control program like git is absolutely necessary; don't try to do a project without it. Get started with source control early and keep all your work forever (you'll be able to see every change in a file's whole history). Keep basic functionality out at a public site. That way, no matter what, as you switch jobs, you have your foundation to take with you (you can quickly build on top of it / don't have to reinvent the wheel).

3. Walmart has released a tool called OneOps. Basically, a company can have an internal cloud and run this tool on top of it (it lets you start up virtual machines without worrying about which physical computers are up or down). It also runs on a bunch of different cloud providers, so you can scale up really fast.
http://arstechnica.com/information-technology/2016/01/a-new-open-source-cloud-management-tool-from-walmart/
4. PapaParse provides a nice way to import comma-separated files and automatically put them into an array of objects with key-value pairs.
The keys are the words in the first row of the file. Summing all the values in a column (if they are numbers) is as easy as a for loop using the loop counter and the name of the column header.
https://github.com/DiginessForever/randomCode/blob/master/Javascript/IPsort.html
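To make the key-value idea concrete, here's a dependency-free sketch of roughly what PapaParse's header mode (Papa.parse(csvText, {header: true}).data) gives you, plus the column-sum loop. (No quoted-field handling here - the real library takes care of all that.)

```javascript
// Mimic PapaParse's {header: true} output: an array of objects keyed by the
// words in the first row of the file.
function parseCsv(text) {
  const [headerLine, ...lines] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return lines.map(line => {
    const cells = line.split(',');
    const row = {};
    headers.forEach((h, i) => { row[h] = cells[i]; });
    return row;
  });
}

// Summing a column really is just a for loop over the column header's name.
function sumColumn(rows, column) {
  let total = 0;
  for (let i = 0; i < rows.length; i++) total += Number(rows[i][column]);
  return total;
}

const csv = 'host,bytes\nweb1,100\nweb2,250\nweb3,50';
console.log(sumColumn(parseCsv(csv), 'bytes')); // → 400
```

With the real library, parseCsv(csv) above is replaced by Papa.parse(csv, {header: true}).data and sumColumn works unchanged.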
5. I tried out TestDisk last night (a Linux data recovery tool). It's included in the Gparted Live CD (Gparted is a tool that lets you repartition your hard disks, and a Live CD lets you boot directly into it and run off the USB or CD so you're not relying on the operating system on disk).

Lessons learned:
a. You can often get back deleted files that aren't written over - seems to be about 50% effective. I think the tool is looking for old partition table records, then finding the files based on those. I recovered a few old files from a long time ago (Windows app installers). I did NOT recover my PGP key. I think that might have been because the partition record for that one may have been overwritten by one of my newer partition tables.

b. Gparted Live CD does not include Photorec - which would have been my next attempted utility. That one tries to find files based on "signatures". I haven't tried it yet.

c. It took about an hour to do a deep scan on a 16GB partition.

16 Jul, 2015:
Houston, FPGA is a go. The Adapteva Parallella has arrived. Burning the SD card now...hehe
Wow, they have code examples for the Epiphany chip from the US Army Research Lab.
This has the instructions for making / installing the sdcard image: https://github.com/parallella/pubuntu
This is so cool: "The codes were developed by the US Army Research Laboratory, Computational Sciences Division, Computing Architectures Branch." https://github.com/parallella/mpi-epiphany
Looks like the current Ubuntu setup for it isn't quite complete. Lots of configuration left to do. It's new from the last time I checked the site though.

-------- 11 Apr, 2015:
Loading Tails OS:
Well, got the opportunity to create another boot disk, this time a live USB for the Tails OS. It automatically uses Tor, in addition to spoofing the MAC address and including other encryption tools such as TrueCrypt and OpenPGP (standard on Linux). The funny part is that it fakes the look of Windows 8, as far as the desktop appearance goes, if you select that option on bootup. It's quite a bit slower running from the USB. I'm not completely certain that the security benefit of it being on USB only is worth the trade-off in speed, but it's nice to know that it's completely portable / runs from just what's on the stick.

The install was very easy - just download the OS image from the distro site, then open a terminal and type a couple commands:

1. "sudo umount /dev/sdb"
2. "sudo dd if=/home/<username>/Downloads/image_file.iso of=/dev/sdb"


It copies from the downloads folder to the USB. When done, just type "sudo shutdown -r now" to restart immediately. It boots up in the Tails OS.

--I'm kind of embarrassed it took me so long to get a computer back up and running after I had the issue with my last OS instance. My wife's computer also had OS problems due to a bug in Microsoft's update process, so hers is dead right now too. Goodbye Windows! I think she'll really like Linux after the initial slight learning curve. So much less in the way of problems.

-- You'll have to change two things about command #2 above: your username (where <username> is), and "image_file.iso" to match the name of the OS image you downloaded. Typically when you plug in a USB flash drive, it comes up at /dev/sdb - if it has partitions, those show up at /dev/sdb1, /dev/sdb2, etc. Those won't matter, because when you use dd (disk dupe), it copies over everything - so no more partitions.

--------

8 Feb, 2015:
The question of how to handle data backups has been asked so many times on Slashdot. Just saw this on the LinuxJournal website - it's a perfect answer to the question.

http://zbackup.org/

The reason for this is that it handles full and incremental backups easily. You'd still have to make sure that your hardware is fault tolerant (RAID or keeping two copies of the backup to compare), as screwing up a bit of data could make the restore fail.
One of the reasons I have started playing with Three.js is that Microsoft bought Minecraft. It no longer works for me on Linux, as they have disabled it in the login portion of the code. So after buying the game, they screwed over prior customers using Linux. There is absolutely no reason a program written in Java should cease working on any operating system, so that was a targeted action. So now I am interested in making my own version, as Minecraft is one of my favorite games. Here's a list of 10 clones for inspiration: https://levelskip.com/simulation/10-Totally-Free-Games-like-Minecraft If it's not in the above list, this might be fun to check out as well: http://alternativeto.net/software/unturned/

Three.js and Blender

I have been experimenting with Blender, following a tutorial to create a capital ship for a sci-fi 3D game. I have also been following tutorials on using Three.js to display 3D scenes in the browser using WebGL/HTML5. So far, the object I created imports into the Three.js scene, but without textures - Three.js is still in early development and does not have full texture-loading support for the JSON object format. https://github.com/DiginessForever/randomCode/tree/master/Javascript/threeJSplayground I also have the MakeHuman Blender plugin installed. It does not offer sliders to change attributes / only offers the default male and female models. I still need to learn how to do rigging for in-game animation.

----------------

Earlier explorations with Three.js and surveying voxel.js docs / code (22 May 2016): So I'm almost to the same point this guy is at (minus attaching a bunch of blocks together and removing all hidden vertices). How cool would it be to have a Minecraft-like game in the browser running from javascript? I probably have a lot to learn from this community - I haven't used Node.js yet, but have heard quite a bit about it. Not sure how I feel yet about their package tooling with npm, browserify, and beefy downloading the latest version every time you reload the page. It seems inefficient / wasteful and makes a lot of assumptions about the availability of the servers. http://voxeljs.com/

Note: If I understand correctly, a voxel is a 3D cube. Don't let it confuse you when he talks about triangles / not using those yet - the 3D cube is made up of 6 square faces, each of which is two triangles, each triangle being 3 vertices. I have a bit more to learn about the model file formats - I've found them a bit confusing because they have redundant information (probably to enable better sorting / indexability). It took me a bit of playing with code (in three.js) to be able to change the color of the square faces.
I had to load images into an array, then create the mesh (a 3D object with textures applied) by giving it that image array. Here's my latest code: https://github.com/DiginessForever/ Just grab threeJSplayground2.html, three.min.js, and THREEx.KeyboardState.js, put them all in the same folder, and open the html file in your browser. I do need to figure out why it's not working for me in Firefox on my laptop while it worked in IE on another computer. (With the inclusion of THREEx.KeyboardState.js, handling the keyboard is a simple if statement for each key - put the action you want to happen inside the if.)

The next thing to experiment with is making "chunks" (basically set up a parent 3D cube and start adding a bunch of child cubes). Each chunk in Minecraft is a bunch of cubes, 16x16 by however far up and down your world goes. Not sure exactly how MC handles hidden cubes. I also need to shrink the images I'm putting on the voxels - loading is too slow with the image sizes I'm using now. Other than that: terrain and skyboxes (auto terrain generation, and making the skybox look good by stretching its image properly). http://voxelmetaverse.com/ http://humaan.com/web-3d-graphics-using-three-js/

----------

5 March 2016: I've hacked my way through the first coherent WebGL tutorial I could find. I've simplified the code there and it's easier to look at. Got my first triangle and square in the browser. I had to look through the comments for hints on problems with the code - there are two lines that have to change for it to work: http://learningwebgl.com/blog/?p=28
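To get a feel for why hidden cubes matter, here's a little javascript sketch (mine, not voxel.js code) that counts how many cube faces in a completely full 16x16x4 chunk are actually visible - the rest are pressed against a solid neighbor and never need to be sent to the GPU:

```javascript
// A chunk: a 16x16xH block of voxels stored in a flat typed array.
const W = 16, D = 16, H = 4;
const solid = new Uint8Array(W * D * H).fill(1); // completely full chunk

// Look up a voxel; anything outside the chunk counts as air.
const at = (x, y, z) =>
  (x < 0 || y < 0 || z < 0 || x >= W || y >= D || z >= H)
    ? 0
    : solid[x + W * (y + D * z)];

// A face is visible only if the neighboring cell in that direction is air.
function visibleFaces() {
  const dirs = [[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]];
  let count = 0;
  for (let z = 0; z < H; z++)
    for (let y = 0; y < D; y++)
      for (let x = 0; x < W; x++) {
        if (!at(x, y, z)) continue;
        for (const [dx, dy, dz] of dirs)
          if (!at(x + dx, y + dy, z + dz)) count++;
      }
  return count;
}

// The full chunk has 1024 cubes = 6144 faces, but only the outer shell shows:
console.log(visibleFaces()); // → 768
```

So even in this tiny chunk, skipping hidden faces cuts the geometry by almost 90% - which is presumably the kind of culling voxel.js does before handing triangles to WebGL.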

Leaflet web mapping page

Status of current webmapping software project:
Have a simple example mostly working (still needs custom markers), including the ability to add custom locations to the map from a file (geoJSON - I still need to work on converting csv to geoJSON if I want to use the csv format).

https://github.com/DiginessForever/randomCode/tree/master/Javascript/LeafletMapping/Example1
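The csv-to-geoJSON conversion I still need could look roughly like this - a sketch assuming name,lat,lon columns (my guess at the file layout, not the final code). The output FeatureCollection is the shape Leaflet's geoJson layer consumes:

```javascript
// Convert rows of "name,lat,lon" into a GeoJSON FeatureCollection.
function csvToGeoJson(text) {
  const [header, ...lines] = text.trim().split('\n');
  const cols = header.split(',');
  const features = lines.map(line => {
    const cells = line.split(',');
    const props = {};
    cols.forEach((c, i) => { props[c] = cells[i]; });
    return {
      type: 'Feature',
      geometry: {
        type: 'Point',
        // GeoJSON coordinate order is [longitude, latitude] - easy to get backwards.
        coordinates: [Number(props.lon), Number(props.lat)],
      },
      properties: { name: props.name },
    };
  });
  return { type: 'FeatureCollection', features };
}

const gj = csvToGeoJson('name,lat,lon\nGateway Arch,38.6247,-90.1848');
console.log(gj.features[0].geometry.coordinates); // → [-90.1848, 38.6247]
```

From there it's a one-liner to put the result on the map with Leaflet's geoJson layer and bind popups from properties.name.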

I asked the following two Stack Overflow questions and got help getting past some obstacles along the way:
1. http://stackoverflow.com/questions/38044652/leaflet-i-cannot-seem-to-get-the-basic-examples-working
2. http://stackoverflow.com/questions/38114333/getting-not-well-formed-on-geojson-during-parse-of-local-file-in-firefox

TO DO:
1. Custom markers
2. Layer control.
3. Locate feeds from different agencies and/or community groups / get people involved in making location feeds (teach them to use it or give them the page source so they can host their own).
4. Experiment with downloading the tiles for a given area (city or county) and loading the maps from the local machine's file system.
5. Possibility (with much work) to enable route mapping: https://github.com/Project-OSRM/osrm-backend/wiki https://www.mapbox.com/blog/drive/


Overall, the webpage works well: it first does geolocation, then shows the map where you are, along with a scaling (zoom) control.

---------------------

Post from 12 June 2016: https://www.mapbox.com/blog/get-started-mapillary/

So before the OpenDataSTL meeting on Tuesday, I'm going to try to get that set of code from the last meetup (well, more like from that whole prior day) working. At the end of this, I should have a working template for any website to interact with data layers (arbitrary locations with arbitrary descriptions over the map, selectable by layer from a control on the page). I think I might also download most of the map tiles for Saint Louis and have them ready for hosting via a group of community servers. Getting used to working with / hosting the tiles directly should also help with using the same kind of setup for imaginary worlds (as you walk through a video game world, your script grabs the right tile and displays it).
I think once that's done, it'll be ready for engagement by the community at large here - data acquisition / aggregation from a ton of feeds.

This seems to be some finishing touches - looks like they're thinking ahead. https://www.mapbox.com/blog/drive/

---------------------

21 Jan, 2016: Late New Year's resolution (these have to be easy, as it's not really a way to make long-term goals): this year I am going to learn how to make a really good-looking web site by fully mastering CSS.
Comments: Me: I'm pretty good at making databases and writing SQL / doing backend code, but I cannot say that my user interfaces have been anything close to good-looking. This year I will change that. I'm using javascript right now at work, playing with Chart.js and PapaParse.
Jason Peddicord: Haven't played with any of them. I use a couple different libraries. Bootstrap is a great place to start for CSS and even JavaScript frameworks. Handlebars is a good JavaScript formatting framework. jQuery is awesome for navigating and manipulating the DOM. D3 is great for charting and graphical data analysis or metric display.


---------------------