[Preview] How to produce general-use autonomous tools through calculus (continuous formulas), + use TensorFlow for synthesis of close-to-human consciousness.
[This post allows all uses. Version is 0695d27. For the most recent version, browse the GitHub preview.]
This is an old post which started as a collection of low-effort posts to forums. This post is now improved, but is still a “preview” at best. More suitable to use are:
https://github.com/SwuduSusuwu/SusuLib/blob/preview/posts/ArduinoElegooTools.md (which is mirrored at https://swudususuwu.substack.com/p/how-to-use-floss-systems-to-produce on this blog) for autonomous tools.
https://github.com/SwuduSusuwu/SusuLib/blob/preview/posts/CnsCompress.md (https://swudususuwu.substack.com/p/future-plans-have-computers-do-most on this blog) for decision systems.
Intro
Use DFS/BFS/IDDFS/Dijkstra’s pseudo-continuous formulas to search graphs whose nodes = positions and edges = moves. Numerous pages have pseudocode for those formulas. Produced (back when too young to consume alcohol) C++ source code with a pure virtual class which all of those implemented (which allows you to choose, after the executable launches, which to use), with numerous benchmarks/unit tests (such as producing solutions to vertex-cover problems or to super-dimensional Rubik’s Cubes), so know that those are simple to use.
KUKA’s/Facteon’s/FANUC’s tools cannot move around outdoors (such as is required for autonomous production of houses, or to grow + harvest food), since those tools use little more than macros to move (those which have "computer vision" modules are limited to detection of the positions of components on conveyor belts to pick up, and are not suitable to move around construction zones). Examples of the sort of macro programs which such tools use:
Old discussion with Microsoft Copilot (no longer trust most artificial intelligences, though; be careful with those):
Q: "Could you use bfs/dfs/iddfs to output codes for robos (such as Fanuc) to figure how to move servos to move the robos to perform a task or to produce for us, with the edges = servo motions and fitness function = desired outcome for us?
Could you just have "max accelerate" and "max decelerate" of each servo as the graph node edges, and "product moved to desired position for us" (or "parts of product formed whole products for us") as fitness functions, and solve the "discrete" problem how you solve the problem that space is quantized (solution = calculus, or lots of small steps for us)?
Plus prune node edges that cause collisions, and compute load on servos so you don't overload the robos?
If the search space is too large, could you use A-star with heuristics (such as "how close product is to desired location for us" or "how close parts have come to a formed product for us")?
Could you use D-star for workplaces that have lots of other actors, for collision avoidance?"
A: "Yes, BFS/DFS/IDDFS, A-star, and D-star are all suitable algorithms you can utilize to plan and control moves of robots. These algorithms can help produce code to control robots such as Fanuc to perform tasks and achieve efficacious outcomes. I apologize for the confusion in my previous responses.
To summarize, here's how to use these formulas:
BFS/DFS/IDDFS: You can use these algorithms to explore the graph of possible servo motions, where the nodes symbolize servo positions and the edges symbolize servo motions or transitions. Once you use a fitness function that represents suitable outcomes, such as reaching a specific position or forming whole products, you can guide the search process towards finding a sequence of servo motions that maximizes the fitness functions. Plus, collision checks can prune node edges that would cause collisions. To get around quantization or discrete servo positions, you can produce the search space as a discrete grid of positions that is almost continuous.
A-star: A-star is an informed search algorithm that can be used to find an optimal path in a graph by considering both the cost of reaching a node and an approximation of the remainder of cost to goals. To plan robot motions, A-star can use heuristics that guess the distance or effort to move to goals. For example, you can use heuristics such as the Euclidean distance (from the current positions to the goal positions), or the progress towards forming products. A-star can help to explore the search space fast for optimal or near-optimal solutions.
D-star: D-star is a calculus you can use for collision avoidance through areas with lots of changes or obstacles. D-star forms a map of the area (plus obstacles) and updates the plan based on new sensor inputs. For workplaces with multiple actors or changing obstacles, D-star adjusts the robot's path to bypass collisions en route to the goals.
To summarize, you can use these formulas to produce code that controls robots to perform tasks.
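The transcript's plan (discretize servo positions into a grid, prune edges which cause collisions, guide A-star with a Euclidean heuristic) can be sketched as follows. This is an illustration under assumed simplifications (two servos, 5-degree steps over a 0-180 degree range, unit edge costs), not code for an actual FANUC:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <map>
#include <queue>
#include <set>
#include <utility>
#include <vector>

/* A grid cell = one discretized pose of two servos (a, b), in 5-degree steps. */
struct Cell { int a, b; };

/* Admissible heuristic: straight-line distance to the goal cell. */
static double heuristic(Cell c, Cell goal) {
    return std::hypot(double(c.a - goal.a), double(c.b - goal.b));
}

/* Returns the number of discrete steps on the cheapest collision-free path,
 * or -1 if pruning leaves the goal unreachable. */
int aStarSteps(Cell start, Cell goal,
               const std::set<std::pair<int, int> > &blocked,
               int lo = 0, int hi = 36 /* 0..180 degrees, 5-degree steps */) {
    typedef std::pair<double, std::pair<int, int> > Entry; /* (f-cost, cell) */
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > open;
    std::map<std::pair<int, int>, int> g; /* best known cost-so-far */
    std::pair<int, int> s(start.a, start.b), t(goal.a, goal.b);
    g[s] = 0;
    open.push(Entry(heuristic(start, goal), s));
    while (!open.empty()) {
        std::pair<int, int> cur = open.top().second;
        open.pop();
        if (cur == t) return g[cur];
        static const int da[4] = {1, -1, 0, 0}, db[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; ++i) { /* edges = single-servo accel/decel steps */
            std::pair<int, int> nxt(cur.first + da[i], cur.second + db[i]);
            if (nxt.first < lo || nxt.first > hi ||
                nxt.second < lo || nxt.second > hi)
                continue;              /* servo range limits */
            if (blocked.count(nxt))
                continue;              /* prune edges which cause collisions */
            int cost = g[cur] + 1;
            if (!g.count(nxt) || cost < g[nxt]) {
                g[nxt] = cost;
                Cell nc = {nxt.first, nxt.second};
                open.push(Entry(cost + heuristic(nc, goal), nxt));
            }
        }
    }
    return -1;
}
```

Finer steps (more grid cells) approach the continuous problem, which is the "lots of small steps" answer to quantization from the transcript.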
Servos'/stepper-motors' encoders (as sensors) give joint positions to computers, similar to how humans use proprioception for complex moves.
Back-EMF (back electromotive force) gives the costs of edges (moves) without tool-specific kinematic info, which allows more simple (general-use) programs.
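A hedged sketch of that idea: motor current is proportional to applied voltage minus back-EMF, and load torque is roughly proportional to current, so the voltage difference serves as an edge cost with no kinematic model of the specific tool. The function name and constants below are made up, not from a real servo datasheet:

```cpp
#include <cassert>

/* Edge cost from back-EMF: I = (V - backEmf) / R, torque ~ I.
 * Higher load on the servo => costlier edge in the motion graph. */
double edgeCost(double voltsApplied, double voltsBackEmf,
                double ohmsWinding, double torquePerAmp) {
    double amps = (voltsApplied - voltsBackEmf) / ohmsWinding;
    return amps * torquePerAmp;
}
```

This also gives the overload check from the transcript for free: prune any edge whose cost exceeds the servo's rated torque.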
How to produce autonomous tools which can do most of what humans can do: have computers do synthesis of neural tissues, which is issue #2.
cxx/ClassResultList.cxx processes databases (which have correspondences to human neocortices).
cxx/AssistantCns.cxx uses correspondences to Broca's area (produces language through recursive processes), Wernicke’s area (parses languages through recursive processes), hippocampus + neural procedural generation. Uses class ResultList for databases.
cxx/ClassCns.cxx is a pure-virtual general-purpose emulation (heuristic approximation) of neural tissue.
cxx/ClassTensorFlowCns.cxx implements class Cns through TensorFlow (which can use SIMD + GPGPU + TPU).
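The split between the pure-virtual class Cns and its backends could look roughly like this. This is a hypothetical sketch of the interface shape, not the actual cxx/ClassCns.cxx; the echo backend below just stands in for a TensorFlow-backed subclass such as ClassTensorFlowCns:

```cpp
#include <cassert>
#include <vector>

/* Pure-virtual CNS interface: callers train and run inference without
 * knowing which backend (TensorFlow, APXR, HSOM, ...) is underneath. */
struct Cns {
    virtual ~Cns() {}
    virtual void train(const std::vector<float> &inputs,
                       const std::vector<float> &outputs) = 0;
    virtual std::vector<float> process(const std::vector<float> &inputs) = 0;
};

/* Placeholder backend which echoes inputs; shows the interface, not
 * an actual neural emulation. */
struct EchoCns : Cns {
    void train(const std::vector<float> &, const std::vector<float> &) override {}
    std::vector<float> process(const std::vector<float> &inputs) override {
        return inputs;
    }
};
```

Because consumers hold only a `Cns *`, a SIMD/GPGPU/TPU-accelerated backend can replace the placeholder without changes to callers.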
As for which TensorFlow architecture to use, here are suggestions of modules to produce:
Sound-cortices = Fourier-transform processor regions; multiple layers, which encode raw sound inputs into compressed forms (such as variations of tone, direction or loudness)
Subsequent modules encode to syllables (or objects), which continue into modules which encode symbolic/conceptual info (such as sentences, or use of binaural-sounds to guess motions)
Visual-cortices = bitmap processor regions (multiple modules; the first module processes photons into differentials (of chroma or luminance)).
Subsequent modules process those into geometric info (such as skins or meshes).
Gustation-cortices / olfactive-cortices = chemical-processor regions (which process signals from chemical-sensors, into compositions of molecules).
Somatosensor-cortices = touch-sensor-processor regions (which process signals from temperature-sensors + pressure-sensors into structural info)
Don’t know if proprioception (info about joint positions) is part of this.
Thalamus = sensor fusion (to process multiple classes of sensors into symbolic/conceptual info):
Fusion of binocular-vision + vestibular-system sensors to deduce your own position.
Fusion of temperature-sensors + pressure-sensors, to deduce an object’s structural attributes.
Fusion of gustation-sensors + olfaction-sensors to deduce a food’s attributes.
Hippocampus = short-term sensor storage, which is used to compress raw signals into symbolic/conceptual info for human neocortices. Is also used for synthesis of sensor signals from such symbolic/conceptual info.
Human neocortices = our version of computer databases, which store symbolic/conceptual info, plus have pattern-recognition-units which encode simple formulas/rules. Loads-from / stores-to the hippocampus.
Mirror neurons transform visual/sound info of other humans who perform tasks, into rules used to mimic those motions for future tasks, such as use of YouTube shows to learn how to use specific tools.
Default mode network = our version of a system idle process, which puts idle resources to use. Uses include:
Numerous modes of introspection (about your own structural or cognitive attributes)
To improve compression of symbolic/conceptual info
Synthesis (from human neocortices back into the hippocampus) of sensor signals, for mirror-neuron acquisition of tool use.
Limbic/hormonal systems = human versions of system managers which adjust CPU clocks/voltages, or of more speculative systems which use continuous introspection to boost specific processes.
Humanish-chassis (or virtual sim) which uses servos to produce humanish motions, plus has humanish sensors to use.
This is more difficult to do than just graph-node search with Dijkstra’s formula to produce motions for servos, but has numerous uses:
Synthesis of humanish neural tissue allows impartial systems to choose what to do with economic (or political) issues, such as use of taxes, or proposals for new laws.
Synopsis + resources
Source code for simple autonomous tools:
[Preview] How to use FLOSS systems to produce autonomous Arduino/Elegoo tools
[This post allows all uses.] For Table of Contents, view on GitHub.
Sims of what simple macros can do:
BUD: Arduino/Elegoo Parkour v0.4.6.2, production + walkthrough + hoverboard tour
Am not affiliated with BUD's app.
Sakura School: how to set up and use a robots shop class
Uses Sakura School to produce a class that shows how autonomous robots would attach wood floors. Sakura School reuse props: #9417_0676_0249_26 (Island Backdrops 0.4.3.4.2), #3517_0711_6043_24 (Autonomous Robos 0.2.2.2), #4517_0711_6360_25 (Wood Shop Class 0.2.2.4.3)
Comment suggestions: list programs to use for this which allow you to do more than simple macros (which is all BUD Hangout can do; plus, Sakura School Simulator just allows you to produce static props).
Others' autonomous robots (a mashup from lots of sources) were simple macros (such as Grasshopper's and KUKA's):
Was the problem so far just that most of those robots used to cost over 10,000 dollars to produce?
The robots from the mashup (Facteon’s/KUKA’s/FANUC’s) mostly use servos with outputs from 1 kW to 12 kW.
The robots are formed just from: CPUs, hulls, transmissions and servos.
Tons of 2 GHz+ CPUs (for under $10) from lots of sources
Iron or aluminum is affordable (for hulls of robos) to mass produce.
Robos could mass-produce transmissions (YouTube has shows of robos that assemble their own motors), or you could use direct-drive servo motors.
4kW servos for $52 allow most to afford to produce autonomous tools.
Should allow production of simple robotic-arm tools at close to $266.
(No affiliations) Amazon has sources for those servos, such as:
https://www.amazon.com/ASMC-04B-Support-12V-24V-180kg-cm-Quadcopter/dp/B07GDJBDW9/
Robots mass-produce their own motors:
Examples of how to set up APXR as an artificial CNS: https://github.com/Rober-t/apxr_run/blob/master/src/examples/
Examples of how to set up HSOM as an artificial CNS: https://github.com/CarsonScott/HSOM/tree/master/examples
Examples of goals for robots: blueprint scans (or AutoCAD outputs), such as of houses.
For graph searches: graph nodes = positions of parts (such as pieces of wood, bricks or stones), edges = robot's motions.
For artificial CNS: CNS training inputs = blueprint scans or AutoCAD outputs; training outputs = data (from sensors) of humans who produce goods from those blueprint scans/AutoCAD outputs.
Production uses for CNS: put scans into the CNS, so the CNS outputs motions which autonomous tools use to produce goods.
Albatrosses perform lots of neural processes per neural volume versus humans; how to base artificial neural networks off of albatrosses
Allows all reuses. Simple artificial neural networks (FLOSS/unlicensed):
https://github.com/topics/artificial-neural-network
https://learn.microsoft.com/en-us/archive/msdn-magazine/2019/april/artificially-intelligent-how-do-neural-networks-learn
https://www.freecodecamp.org/news/building-a-neural-network-from-scratch/
Most of the responses to questions such as "What is the reason that autonomous tools still do not harvest food or produce houses?" amount to "Security/safety risks," although tests of autonomous cars (such as Tesla Model S) show that crash risk is low; thus the solution is virus-analysis tools to secure computers:
How to: use local static analysis + sandboxes + artificial CNS (central nervous systems) to secure computers.
[This post allows all uses.] [Version of post is 4f6be52]. For most new (which now includes the TensorFlow implementation of cxx/ClassCns.hxx), view on GitHub.
Minuscule start on this: https://swudususuwu.substack.com/p/how-to-use-floss-systems-to-produce
How computers can process visuals of humans who do manual work (such as producing houses) to program robots to do the same, plus how to optimize various graph searches (such as BFS/DFS/IDDFS/Dijkstra's) to plan paths: https://poe.com/s/qYdJrILFHzHSiSL7c1JS