I am a curious software engineer and casual bodybuilder. I am intrigued by streams of expressions of strings of bits of 1's and 0's.
Notation 1 - Manifold Learning: ϕ(x, θ)⊤w (Bengio et al.)
Notation 2 - Supermanifold Learning: ϕ(x, θ, θ̄)⊤w (Jordan Bennett)
While traditional Deep Learning may involve manifold learning, my work concerns supermanifold learning instead, as the difference in notation above briefly shows; a rough sketch of that difference follows below.
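For intuition, here is a minimal sketch of how the two notations might cash out in code. This is my own illustrative assumption, not an excerpt from Bengio et al. or from my paper: phi_manifold, phi_supermanifold, the tanh/cos feature maps, and all shapes are hypothetical stand-ins. The only point it demonstrates is that the supermanifold variant carries a second, conjugate parameter set θ̄ alongside θ before the linear readout w is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_manifold(x, theta):
    # Ordinary learned feature map phi(x, theta): one tanh layer, purely illustrative.
    return np.tanh(theta @ x)

def phi_supermanifold(x, theta, theta_bar):
    # Hypothetical supermanifold feature map phi(x, theta, theta_bar):
    # the conjugate parameter set theta_bar enters as a second coordinate set.
    # This is an illustrative stand-in, not the construction from the paper.
    return np.tanh(theta @ x) * np.cos(theta_bar @ x)

x = rng.normal(size=4)               # input vector
theta = rng.normal(size=(8, 4))      # parameters (theta)
theta_bar = rng.normal(size=(8, 4))  # conjugate parameters (theta bar)
w = rng.normal(size=8)               # linear readout weights

y1 = phi_manifold(x, theta) @ w                  # phi(x, theta)^T w
y2 = phi_supermanifold(x, theta, theta_bar) @ w  # phi(x, theta, theta bar)^T w
print("manifold readout:", y1, "supermanifold readout:", y2)
```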
I recently created something called the “Supersymmetric Artificial Neural Network”: Supermathematics-and-Artificial-General-Intelligence
A little about another one of my publications, “Artificial Neural Networks for ‘kids’ ”:
Synopsis:
“This book is for both ‘kids’ and experts! (This feat was not easy to pull off.)
This short book contains what is probably the easiest, most intuitive, and most fun tutorial on describing an artificial neural network from scratch. (It is a clever and enjoyable yet detailed guide that doesn't "dumb down" the neural network literature.)
This short book is a chance to understand the whole structure of an elementary but powerful artificial neural network, just as well as you understand how to write your name.”
Amazon:
Amazon.com: Artificial Neural Networks for Kids: The easiest most intuitive neural network tutorial you’ll probably ever find. eBook: Jordan Bennett: Kindle Store
Free copy on ResearchGate, where the equations are nicely coloured differently from their surrounding text (instead of sharing the same colour as the text):
https://www.researchgate.net/pub...
Free copy on Quora:
Jordan Bennett's answer to What is the most intuitive explanation of artificial neural networks?
A nice YouTube video:
Artificial neural networks for kids