Program general-purpose robots as autonomous tools through calculus. It is possible to produce general-purpose robots as autonomous tools, plus close-to-human consciousness.
Kuka/Facteon/Fanuc robots can't work outdoors or as carpenters because their software was just macros. Use DFS/BFS/IDDFS to run close to continuous functions, as calculus does (lots of small steps).
This post allows all uses.
General artificial intelligence allows autonomous robots to produce complex goods
DFS/BFS/IDDFS pseudocodes/calculus: https://www.geeksforgeeks.org/iterative-deepening-searchids-iterative-deepening-depth-first-searchiddfs/
What problems must be solved to have all goods (houses, vehicles, desktops, phones, foods) produced by autonomous robots for us?
How to program general-purpose robots as autonomous tools:
Q: "Could you use bfs/dfs/iddfs to output codes for robos (such as Fanuc) to figure how to move servos to move the robos to perform a task or to produce for us, with the edges = servo motions and fitness function = desired outcome for us?
Could you just have "max accelerate" and "max decelerate" of each servo as the graph node edges, and "product moved to desired position for us" (or "parts of product formed whole products for us") as fitness functions, and solve the "discrete" problem how you solve the problem that space is quantized (solution = calculus, or lots of small steps for us)?
Plus prune node edges that cause collisions, and compute load on servos so you don't overload the robos?
If the search space is too large, could you use A-star with heuristics (such as "how close product is to desired location for us" or "how close parts have come to a formed product for us")?
Could you use D-star for workplaces that have lots of other actors, for collision avoidance?"Yes, BFS/DFS/IDDFS, A-star, and D-star are all suitable algorithms you can utilize to plan and control moves of robots. These algorithms can help produce code to control robots such as Fanuc to perform tasks and achieve efficacious outcomes. I apologize for the confusion in my previous responses.
To summarize, here's how to use these algorithms:
BFS/DFS/IDDFS: You can use these algorithms to explore the graph of possible servo motions, where nodes represent servo positions and edges represent servo motions or transitions. With a fitness function that represents the desired outcomes, such as reaching a specific position or forming whole products, you can guide the search towards a sequence of servo motions that maximizes fitness. Collision checks can prune edges that would cause collisions. To get around quantized or discrete servo positions, produce the search space as a discrete grid of positions fine enough to be almost continuous.
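A minimal sketch of such a search, assuming two servos whose positions are quantized to STEP-degree increments; `is_goal` and `is_collision` are hypothetical callbacks the caller supplies (not part of any robot vendor's API):

```python
# Minimal BFS sketch over quantized servo positions (not production code).
from collections import deque

STEP = 1.0  # degrees; smaller steps run closer to continuous motion, as calculus does

def neighbors(state):
    """Each edge = one servo moved one quantum up or down."""
    for i in range(len(state)):
        for delta in (+STEP, -STEP):
            nxt = list(state)
            nxt[i] += delta
            yield tuple(nxt)

def bfs_plan(start, is_goal, is_collision):
    """Returns a list of joint states from start to a goal state, or None."""
    frontier = deque([start])
    parents = {start: None}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            path = []
            while state is not None:  # walk parents back to start
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parents and not is_collision(nxt):  # prune collision edges
                parents[nxt] = state
                frontier.append(nxt)
    return None
```

For example, `bfs_plan((0.0, 0.0), lambda s: s == (10.0, -5.0), lambda s: False)` returns a 15-move path with collisions ignored; a real fitness function and collision checker would come from the workplace model.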
A-star: A-star is an informed search algorithm that can find an optimal path in a graph by considering both the cost of reaching a node and an estimate of the remaining cost to the goal. To plan robot motions, A-star can use heuristics that estimate the distance or effort left to reach the goals: for example, the Euclidean distance from the current positions to the goal positions, or the progress towards formed products. A-star can explore the search space fast and find optimal or near-optimal solutions.
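A hedged A-star sketch along the same lines, reusing `neighbors` from the BFS sketch above; `edge_cost` is a hypothetical per-move cost function (for example, from the back-EMF measurements described further down):

```python
# Minimal A-star sketch: f(n) = g(n) + h(n), with Euclidean distance as h.
import heapq, math

def euclid(a, b):
    """Euclidean distance between two joint-position tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def astar_plan(start, goal, neighbors, edge_cost, is_collision):
    frontier = [(euclid(start, goal), 0.0, start)]  # (f, g, state)
    parents, g = {start: None}, {start: 0.0}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in neighbors(state):
            if is_collision(nxt):
                continue  # prune collision edges
            new_g = g[state] + edge_cost(state, nxt)
            if nxt not in g or new_g < g[nxt]:
                g[nxt] = new_g
                parents[nxt] = state
                heapq.heappush(frontier, (new_g + euclid(nxt, goal), new_g, nxt))
    return None
```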
D-star: D-star is an algorithm you can use for collision avoidance through areas with lots of changes or obstacles. D-star maintains a map of the area (plus obstacles) and updates the plan based on new sensor inputs. For workplaces with multiple actors or changing obstacles, D-star adjusts the robot's path to bypass collisions on the way to the goals.
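The full D-star (and D-star Lite) algorithm repairs its previous search instead of restarting it; as a simplified sketch in the same spirit, the loop below just re-runs the A-star planner above whenever sensors report that obstacles moved. `sense_obstacles` and `execute` are hypothetical placeholders for the robot's own I/O:

```python
# Simplified replanning loop (in the spirit of D-star, but restarts the search).
def replan_loop(start, goal, neighbors, edge_cost, sense_obstacles, execute):
    state = start
    while state != goal:
        obstacles = sense_obstacles()  # fresh set of blocked joint states
        path = astar_plan(state, goal, neighbors, edge_cost,
                          is_collision=lambda s: s in obstacles)
        if path is None:
            return False               # goal unreachable with current obstacles
        state = path[1] if len(path) > 1 else path[0]  # one move, then re-sense
        execute(state)
    return True
```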
To summarize, you can use these algorithms to produce code that controls robots to perform tasks.
Servos'/stepper-motors' encoders (as sensors) pass joint-angle data (the node space) to computers.
Back-EMF (resistance) gives the costs of edges (moves), which allows planning lower-cost routes.
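As a hedged illustration of that cost model (the electrical-work formula below is an assumption, not measured from any specific servo):

```python
def edge_cost_from_back_emf(back_emf_volts, current_amps, duration_s):
    """Approximate energy (joules) one quantized move spends: power = V * I,
    so work = V * I * t. Feeding this into A-star/Dijkstra's as the edge cost
    makes the planner prefer lower-load routes."""
    return back_emf_volts * current_amps * duration_s
```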
Should I produce code for those, or has someone else already?
If I should, what language should be used: Java, C++, Python, or others?
How to produce AIs that run close-to-conscious (do all human functions, just with no proof of consciousness):
The following should allow robots close-to-human consciousness:
A SpiNNaker-class network of neurons (whose axons carry human-neuron action potentials, as opposed to 1-op or 2-op functions that just add inputs to form outputs; a minimal spiking-neuron sketch follows this list),
+ audio-processor regions (multi-layer: low layers code raw audio input to more compressed forms such as variations of tone, direction or loudness; more layers up code to syllables or objects; more layers up code to words or audio-derived motions of objects around you),
+ visual-processor regions (multi-layer: low layers code photons to variations of color or brightness; upper layers code to geometric info, such as structures or tools),
+ gustation-processor regions (would code from chemical sensors to info about the compositions of molecules),
+ somatosensor-processor regions (would code from hot-sensors/cold-sensors/pressure-sensors to geometric info about structures, plus proprioception),
+ a thalamus region to hook up sensors (such as how to orient "up" based off of vision or proprioception, and how to locate structures from vision + audio + somatosensors),
+ a hippocampus to form memories from sensors,
+ a neocortex region of pattern-recognition units to form long-term memories and learn how to do work from unconscious playback out of the hippocampus,
+ mirror neurons to form inputs to the thalamus/hippocampus from the new stuff those around you use tools for, to allow figuring out how to perform new labors or use new tools,
+ a default mode network for introspection (such as to look up memories of emotions from hormones + memories of thoughts/ideas + memories of learned work + memories of how others do work or behave, to form new solutions),
+ limbic systems for hormones (such as hormones that alter how much of the processors to use to process surroundings, versus how much to use for introspection/new plans),
+ close-to-human hulls controlled from servos/motors (or a simulator that allows moving around virtual worlds) that allow human motions and have inputs for all the sensors.
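A minimal sketch (plain Python, not SpiNNaker code) of what separates such neurons from 1-op adders: a leaky integrate-and-fire unit integrates its input over time and emits discrete action potentials:

```python
# Leaky integrate-and-fire neuron: integrates input across timesteps and
# fires a discrete spike (action potential), unlike a unit that just sums
# inputs into an output in one op.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9, reset=0.0):
        self.v = 0.0                  # membrane potential
        self.threshold = threshold    # potential at which the neuron fires
        self.leak = leak              # fraction of potential kept per timestep
        self.reset = reset            # potential after a spike

    def step(self, weighted_input):
        """One timestep: leak, integrate, fire if over threshold."""
        self.v = self.v * self.leak + weighted_input
        if self.v >= self.threshold:
            self.v = self.reset
            return 1                  # action potential
        return 0

# A constant drive of 0.3 per step makes the neuron spike periodically:
neuron = LIFNeuron()
spike_train = [neuron.step(0.3) for _ in range(20)]
```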
Am not sure if this would have extra uses for us,
or if just the calculus to program autonomous robots would do for us.
This is more difficult to do,
but such consciousness would allow solving political problems for us, beyond just how to program autonomous robots to produce for us.
If this would do, what languages should be used?
Used Sakura School and Bud to produce computer simulations of how to run simple autonomous robots.
What are good apps to use to do more than simple macros?
Others' autonomous robots (a mashup from lots of sources) were just simple macros (such as Grasshopper's at https://grasshopperdocs.com/addons/robots.html and Kuka's at https://www.kuka.com/en-us/products/robotics-systems/software/simulation-planning-optimization/kuka_sim ):
Was the problem so far just that most of those robots used to cost over 10,000 dollars to produce?
The robots from the mashup = Facteons/Kukas/Fanucs;
most use servos with outputs from 1 kW to 12 kW.
The robots are formed just from CPUs, hulls, transmissions, and servos.
Tons of 2 GHz+ CPUs (for under 10 dollars) are available from lots of sources.
Iron or aluminum is affordable (for the hulls of robots) to mass-produce.
Robots could mass-produce transmissions (YouTube has shows of robots that assemble their own motors), or you could use direct-drive servo motors.
4 kW servos for 52 dollars should allow you to form autonomous production.
Should allow producing those for around 266 dollars for us.
(No affiliations.) Amazon has sources for those servos, such as:
https://www.amazon.com/ASMC-04B-Support-12V-24V-180kg-cm-Quadcopter/dp/B07GDJBDW9/
Robots mass-produce their own motors:
Examples of how to set up APXR as an artificial CNS: https://github.com/Rober-t/apxr_run/blob/master/src/examples/
Examples of how to set up HSOM as an artificial CNS: https://github.com/CarsonScott/HSOM/tree/master/examples
Examples of goals for robots: blueprint scans (or AutoCAD outputs), such as houses.
For graph searches: graph nodes = positions of parts (such as pieces of wood, bricks, or stones), edges = the robot's motions.
For an artificial CNS: training inputs = blueprint scans or AutoCAD outputs, training outputs = data (from sensors) of humans who produce goods from those blueprint scans/AutoCAD outputs.
Production use of the CNS: put scans into the CNS, and the CNS outputs the motions for robots to produce those goods.
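A hedged sketch of that training setup, with a least-squares fit standing in for a real artificial CNS (such as apxr_run) and random arrays standing in for real scans and sensor recordings:

```python
# Imitation-learning sketch: blueprint scans in, recorded human motions out.
import numpy as np

blueprints = np.random.rand(100, 64)  # stand-in: 100 blueprint scans, 64 features each
motions = np.random.rand(100, 12)     # stand-in: sensor data of humans producing goods

# "Training": fit a linear map from blueprint features to motions
# (a real CNS would replace this with its own learning rules).
W, *_ = np.linalg.lstsq(blueprints, motions, rcond=None)

def cns_output(new_blueprint):
    """Production use: new scan in, motions for the robots out."""
    return new_blueprint @ W
```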
Just as a human's CNS (billions of neurons plus hundreds of layers of cortices) can parse old blueprints (plus watch other humans use those old blueprints to produce goods such as houses) -- to set up our synapses to parse new blueprints and figure out how to use tools to produce new goods --
so too could a robot's artificial CNS (trillions of artificial neurons plus thousands of layers of cortices) do this.
Just as humans can watch amateurs produce prototypes (plus parse blueprints later, based off of those prototypes, to mass-produce goods) -- to set up the synapses of our CNS to allow us to produce new blueprints from new examples (to mass-produce new goods) --
so too could an artificial CNS do this.
"HSOM" (license is FLOSS) has simple Python artificial neural networks/maps which could run bots to do simple conversations (such as "ChatGPT 4.0" or "Claude-3 Opus",) but not close to complex enough to house human consciousness: https://github.com/CarsonScott/HSOM
"apxr_run" (https://github.com/Rober-t/apxr_run/ , license is FLOSS) is almost complex enough to house human consciousness;
Various FLOSS neural network activation functions (absolute, average, standard deviation, sqrt, sin, tanh, log, sigmoid, cos), plus sensor functions (vector difference, quadratic, multiquadric, saturation [+D-zone], gaussian, cartesian/planar/polar distances) https://github.com/Rober-t/apxr_run/blob/master/src/lib/functions.erl ;
Various FLOSS neuroplastic functions (self-modulation, Hebbian function, Oja's function) https://github.com/Rober-t/apxr_run/blob/master/src/lib/plasticity.erl ;
Various FLOSS neural network input aggregator functions (dot products, product of differences, mult products) https://github.com/Rober-t/apxr_run/blob/master/src/agent_mgr/signal_aggregator.erl ;
Various simulated-annealing functions for artificial neural networks (dynamic [+ random], active [+ random], current [+ random], all [+ random]) https://github.com/Rober-t/apxr_run/blob/master/src/lib/tuning_selection.erl ;
Choices to evolve connections through Darwinian or Lamarckian formulas: https://github.com/Rober-t/apxr_run/blob/master/src/agent_mgr/neuron.erl .
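As a small illustration of one of those neuroplastic functions, here is Oja's rule in plain Python (a sketch, not a translation of plasticity.erl):

```python
# Oja's rule: Hebbian growth plus a normalization term, so the weights
# converge towards the input's principal component instead of blowing up.
import numpy as np

def oja_update(w, x, lr=0.01):
    """w: weight vector; x: input vector; lr: learning rate."""
    y = float(np.dot(w, x))            # neuron output
    return w + lr * y * (x - y * w)    # delta-w = lr * y * (x - y*w)
```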
To run an artificial CNS faster (with less lag), vectorize/parallelize this and/or use TensorFlow's MapReduce:
Resources for more complex CNS:
Most of the responses to questions such as "How come no autonomous robots are allowed to produce stuff outdoors, such as houses?" amount to "security/safety risks," although autonomous vehicles that operate in controlled environments have shown a reduced risk of crashes; so it is obvious that the world needs more virus-analysis tools to secure us.
All message responses go towards future posts.
How computers can process visuals of humans who do manual work (such as producing houses) to program robots to do the same, plus how to optimize various graph searches (such as BFS/DFS/IDDFS/Dijkstra's) to plan paths: https://poe.com/s/qYdJrILFHzHSiSL7c1JS
https://github.com/SwuduSusuwu/SubStack/issues/2 shows progress towards such autonomous tools.