Albatrosses perform more neural processing per unit of neural volume than humans; how to base artificial neural networks on the albatross
All posts allow all (re)uses, with or without attribution.
Simple artificial neural networks:
(FLOSS/unlicensed) https://github.com/topics/artificial-neural-network
https://learn.microsoft.com/en-us/archive/msdn-magazine/2019/april/artificially-intelligent-how-do-neural-networks-learn
https://www.freecodecamp.org/news/building-a-neural-network-from-scratch/
https://interestingengineering.com/innovation/artificial-neural-network-black-box-ai
https://www.javatpoint.com/artificial-neural-network
https://wiki.pathmind.com/neural-network
Q: "What CNS regions do albatrosses have?"
Albatrosses, like other birds, have a central nervous system (CNS) that consists of various regions responsible for different functions. While the specific brain regions and their functions may vary among bird species, including albatrosses, here is a general overview of the major brain regions found in avian species:
Telencephalon: The largest part of the bird CNS, responsible for higher cognitive functions. It includes the following regions:
Pallium: The avian equivalent of the mammalian cortex, associated with sensory processing, learning, memory, and complex behaviors.
Nidopallium and Arcopallium: Involved in integrating sensory input, motor control, and cognitive processes.
Hippocampus (in some bird species): Associated with spatial learning and memory.
Diencephalon: This region includes the following structures:
Thalamus: Relays sensory information to other CNS regions.
Hypothalamus: Regulates various physiological processes, such as hormone production and homeostasis.
Mesencephalon (midbrain): Integrates sensory information and controls visual and auditory reflexes.
Cerebellum: Involved in motor coordination, balance, and fine-tuning of movements.
Optic Tectum: The avian equivalent of the mammalian superior colliculus; processes visual information and coordinates eye and head movements.
Brainstem: Controls autonomic functions such as respiration, cardiovascular regulation, and digestion, as in humans.
Note that the specific structure and functions of CNS regions in albatrosses may show unique adaptations to their ecological niche and behaviors. Detailed anatomical studies of the albatross CNS would offer more data about its regions and their functions.
Q: "How do birds have so much more intelligence per neural volume versus humans?"
Although pop science teaches us that birds have higher intelligence per neural volume than humans, it is essential to clarify that intelligence is a complex, multifaceted concept that cannot be determined solely by brain size or neural volume. The notion that birds have higher intelligence per neural volume is based on the fact that certain bird species exhibit remarkable cognitive abilities despite having much smaller brains than mammals such as humans.
There are a few possible reasons for these differences:
Neural density and optimizations: Birds have higher neural densities, i.e. more neurons per volume of CNS tissue. This increased density allows more neural connections and processing per volume of CNS.
Specialized brain regions: Birds have specific brain regions that are highly developed and specialized for particular functions. For example, the avian pallium, which is analogous to the mammalian cortex, is particularly well developed in birds and is associated with complex cognitive processes.
Neural circuits and connections: The organization and connectivity of neural circuits can influence cognitive abilities. Birds have intricate neural circuits that enable complex behaviors such as problem solving, tool use, and social interaction.
Evolutionary adaptations: Birds have evolved unique adaptations that contribute to their cognitive abilities. For instance, some bird species exhibit extensive vocal learning and complex social behaviors, which require sophisticated cognitive processes.
Note that it is difficult to compare intelligence across species, as intelligence manifests in various forms and is influenced by diverse factors: not only brain size or neural volume but also neural architecture, ecological niche, social behavior, and environmental demands. Humans, on the other hand, possess unique cognitive abilities and have developed more complex societies, languages, and tools. Although birds are superior at some cognitive tasks, humans have exceptional abilities in areas such as abstract reasoning, language comprehension, and cultural innovation.
To summarize, birds have impressive cognitive abilities relative to their CNS size, but intelligence is a complex idea that is not based on neural volume alone. Intelligence is based on various factors, and different species have evolved specialized cognitive skills to suit their ecological and evolutionary niches.
Q: "How to base artificial neural networks off of birds, to allow lower resource use versus ones based off of human cortices?"
It is complex to base artificial neural networks on avian CNS to lower resource usage versus those based on human cortices. There are lots of challenges in reusing functions of complex biological systems, but here are a few aspects you could consider to develop such networks:
Neural architecture: Explore the neural architecture and connection patterns of avian CNS. Birds have specialized CNS regions and neural circuits that allow fast information processing. Look at how these neural architectures could be adapted to artificial neural networks to enhance computation.
Neural density: Take inspiration from the higher neural density of the avian CNS. Putting more artificial neurons per volume could increase computational power without large CPUs.
Specialized functions: Look into specific cognitive abilities of birds that are of interest, such as how birds solve problems, recognize patterns, or perform motor control. Look into how these abilities are based on specialized CNS regions, and produce network architectures that work similarly for those functions.
Skill acquisition: Birds have various forms of learning, such as associative learning, vocal learning, and spatial learning. Add suitable mechanisms to artificial neural networks to enable adaptive and flexible behavior based on avian CNS optimizations.
Environmental interactions: Many of birds' cognitive abilities are based on environmental interactions. Consider how to use sensorimotor interactions (such as proprioception) or raw sensor data to enhance skill formation.
Note that producing artificial neural networks based on avian CNS is an area of active research, and there is much to explore and discover. The field of neuromorphic computing, which aims to produce brain-inspired computer systems, could make use of techniques for producing such networks.
Balance CPU speed against the complexity of the desired cognitive abilities; leverage the unique features of avian neural systems and tailor them to your computational goals.
Q: "Goose and albatross process lots per neural volume versus humans, because birds evolved from dinosaurs and so had longer to optimize neural structures versus mammals; how to base artificial neural networks off of goose or albatross neural networks?"
I guess you're interested in how the neural networks of birds such as geese and albatrosses could inspire the layouts/architectures of artificial neural networks (ANNs). While it's true that birds have had a long evolutionary history to optimize their neural structures, note that the structures and functions of biological neural networks vary greatly from those of ANNs.
Biological neural networks, such as bird CNSs, are composed of interconnected neurons that transfer electrochemical signals. These networks have evolved to process sensory data, control motor functions, and perform complex cognitive tasks. However, the exact mechanisms by which these networks operate are still not fully understood.
ANNs, on the other hand, are computational models inspired by the human CNS. ANNs consist of artificial neurons, or nodes, organized into layers. Each node receives input from nodes in the previous layer, applies a function to these inputs, and passes the output to nodes in the next layer.
While ANNs are inspired by biological CNSs, an ANN's structures and processes are not mirror images of a biological CNS's. ANNs use simple models of neurons and learning; for example, ANNs use mathematical functions to model the activation and training of neurons.
Basing ANNs on the neural networks of geese or albatrosses faces a few challenges. First, our knowledge of the avian CNS is still limited. Second, even if we fully understood these networks, it's unclear how we could translate this knowledge into the design of ANNs, given the differences between biological and artificial networks.
However, there are ongoing research efforts to learn from biological neural networks to improve ANNs. For instance, researchers have developed ALBATROSS, a deep learning model for predicting intrinsically disordered protein ensemble dimensions from sequence. This model was named after the bird, but is not a mirror image of the albatross's neural networks.
Thus, you could draw inspiration from biological neural networks, but much of the work of producing ANNs consists of computational and mathematical challenges. It involves selecting the right architecture, activation functions, and learning algorithms to optimize the network's performance for a specific task.
Q: "Modern artificial neural networks such as SpiNNaker are based off of human neural networks because humans are the smartest of mammals, but lots of robots can not house enough RAM and CPUs; so, as opposed to neuromorphic artificial neural networks based off of humans, how about ones based off of goose or albatross?"
Your question touches on a major issue of neuromorphic computing: how to design small, fast artificial neural networks (ANNs) that run on limited hardware resources, such as those of robots.
SpiNNaker is one of the more remarkable examples of neuromorphic systems. It's a digital neuromorphic architecture designed specifically for low-power simulation of large-scale spiking neural networks at speeds close to biological real time. It's inspired by the human brain and can simulate a network of billions of neurons.
However, basing ANNs on the neural networks of birds (such as geese or albatrosses) presents challenges: our knowledge of avian neural networks is small and incomplete. Even if we fully understood these networks, it's unclear how we could translate this knowledge into the design of ANNs, given the differences between biological and artificial networks.
However, the idea of reusing ideas from nature to improve technology has merit. In fact, there are ongoing research efforts to learn from biological neural networks to improve ANNs. For instance, researchers are looking for methods to produce ANNs that use less power, a characteristic of biological neural networks.
Thus, although basing ANNs on the neural networks of specific animals (such as geese or albatrosses) has possible uses, a full clone of avian neural networks is beyond our reach due to our limited understanding of these networks and the fundamental differences between biological and artificial networks. However, the field of neuromorphic engineering continues to draw inspiration from biology to improve the design and efficiency of ANNs.
Versus humans
The following should allow artificial neural networks to come close to human consciousness:
A SpiNNaker-class network of neurons (whose axons perform human-neuron action potentials, as opposed to a 1-op or 2-op function that just sums inputs to form outputs),
+ an audio-processor region (multi-layer; low layers code raw audio input into more compressed forms such as variations of tone, direction or loudness; layers above code to syllables or objects; layers above those code to words, or to audio-derived motions of objects around you),
+ a vision-processor region (multi-layer; low layers code photons into variations of color or brightness, upper layers code to geometric information such as structures or tools),
+ a gustation-processor region (codes chemical-sensor input into information about the composition of molecules),
+ a somatosensor-processor region (codes heat-sensor/cold-sensor/pressure-sensor input into geometric information about structures, plus proprioception),
+ a thalamus region to hook sensors together (such as how to position "up" based on vision or proprioception, or how to locate structures from vision + audio + somatosensors),
+ a hippocampus to form memories from sensors,
+ a neocortex region of pattern-recognition units to form long-term memories and learn how to do work from unconscious playback from the hippocampus,
+ mirror neurons to form inputs to the thalamus/hippocampus from new tool uses of those around you, to allow figuring out how to perform new labors or use new tools,
+ a default mode network for introspection (such as to look up memories of emotions from hormones + memories of thoughts/ideas + memories of learned work + memories of how others do work or behave, to form new solutions),
+ a limbic system for hormones (such as hormones that alter how much of your processor is used to process what surrounds you now, versus how much is used for introspection),
+ a human form controlled through servos/motors, or a simulated form that can move around a virtual world that allows human motions and has inputs for all sensors.
Purposes/uses: allows autonomous robots to produce complex goods for us (Fanucs/Kukas/Facteons are limited to simple macros);
allows better simulators to make decisions/solve problems for us;
allows artificial neural networks to form/run schools for us.
"HSOM" (license is FLOSS) has simple Python artificial neural networks/maps which could run bots that do simple conversations (such as "ChatGPT 4.0" or "Claude-3 Opus" do), but it is not close to complex enough to house human consciousness:
https://github.com/CarsonScott/HSOM
"apxr_run" (https://github.com/Rober-t/apxr_run/ , license is FLOSS) is almost complex enough to house human consciousness;
"apxr_run" has various FLOSS neural network activation functions (absolute, average, standard deviation, sqrt, sin, tanh, log, sigmoid, cos), plus sensor functions (vector difference, quadratic, multiquadric, saturation [+D-zone], gaussian, cartesian/planar/polar distances): https://github.com/Rober-t/apxr_run/blob/master/src/lib/functions.erl ;
various FLOSS neuroplasticity functions (self-modulation, Hebbian function, Oja's function): https://github.com/Rober-t/apxr_run/blob/master/src/lib/plasticity.erl ;
various FLOSS neural network input-aggregator functions (dot products, products of differences, mult products): https://github.com/Rober-t/apxr_run/blob/master/src/agent_mgr/signal_aggregator.erl ;
various simulated-annealing functions for artificial neural networks (dynamic [+ random], active [+ random], current [+ random], all [+ random]): https://github.com/Rober-t/apxr_run/blob/master/src/lib/tuning_selection.erl ;
plus choices to evolve connections through Darwinian or Lamarckian formulas: https://github.com/Rober-t/apxr_run/blob/master/src/agent_mgr/neuron.erl .
It is simple to convert the Erlang functions to Java/C++ for reuse in fast programs;
the syntax is close to Lisp's.
Simple example of how to produce conversation bots from a CNS ("HSOM" (the simple Python artificial CNS) is enough to do this), which should have results almost as complex as "ChatGPT 4.0" or "Claude-3 Opus":
git clone https://github.com/SwuduSusuwu/SubStack.git
cd ./SubStack/c++
cat ./ClassResultList.c++ ./ClassCns.c++ ./ConversationCns.c++
typedef struct ResultList {
	std::unordered_set<std::string> hashes; /* Checksums (such as Sha2() outputs), to avoid duplicates of question/response databases */
	std::set<std::string> signatures; /* Unique identifiers (such as Universal Resource Identifiers), to avoid duplicates of question/response databases */
	std::vector<std::string> bytes; /* Copies of all conversations the database has. Uses lots of space. Just populate this to set up the synapses of the CNS. */
	/* Used `std::string` for binaries (versus `vector<char>`) because:
	 * "If you are going to use the data in a string like fashon then you should opt for std::string as using a std::vector may confuse subsequent maintainers. If on the other hand most of the data manipulation looks like plain maths or vector like then a std::vector is more appropriate." -- https://stackoverflow.com/a/1556294/24473928
	 */
} ResultList;
template<template<class...> class Container>
#if ALL_USES_TEXT
const size_t maxOfSizes(const Container<const char *> &list) {
	auto it = std::max_element(list.begin(), list.end(), [](const auto &s, const auto &x) { return strlen(s) < strlen(x); });
	return strlen(*it); /* WARNING! `strlen()` just does UTF8-strings/hex-strings; if binary, must use `it->size()` */
}
#else
const size_t maxOfSizes(const Container<std::string> &list) {
	auto it = std::max_element(list.begin(), list.end(), [](const auto &s, const auto &x) { return s.size() < x.size(); });
	return it->size();
}
#endif /* if ALL_USES_TEXT */
typedef enum CnsMode {
cnsModeInt,
cnsModeUint,
cnsModeFloat,
cnsModeDouble,
cnsModeChar,
cnsModeVectorInt,
cnsModeVectorUint,
cnsModeVectorFloat,
cnsModeVectorDouble,
cnsModeVectorChar,
cnsModeString = cnsModeVectorChar
} CnsMode;
typedef class Cns {
public:
	template<typename Input>
	void inputsToSetup(Input inputs); /* template members cannot be `virtual` in C++ */
	template<typename Output>
	void outputsToSetup(Output outputs);
	virtual void setInputMode(CnsMode);
	virtual void setOutputMode(CnsMode);
	virtual void setInputNeurons(size_t x);
	virtual void setOutputNeurons(size_t x);
	virtual void setLayersOfNeurons(size_t x);
	virtual void setNeuronsPerLayer(size_t x);
	virtual void setTrainingInputs(const std::vector<std::string> &inputs);
	virtual void setTrainingOutputs(const std::vector<std::string> &outputs);
	virtual void setupSynapses();
	virtual ~Cns() {}
	template<typename Input, typename Output>
	const Output process(Input input);
} Cns;
#ifdef USE_HSOM /* Todo. ( https://stackoverflow.com/questions/3286448/calling-a-python-method-from-c-c-and-extracting-its-return-value ) suggests various syntaxes to use for this, with unanswered comments such as "Does this support classes?" */
/* "If you're using Python >3.5, PyString_FromString() is PyUnicode_FromString()" */
#include <Python.h>
typedef class HsomCns : public Cns { /* https://github.com/CarsonScott/HSOM */
public:
	HsomCns() {
		setenv("PYTHONPATH", ".", 1);
		Py_Initialize();
		// PyRun_SimpleString("import sys; sys.path.append('.')"); PyRun_SimpleString("import hsom; from hsom import SelfOrganizingNetwork;"); /* Was told not to use PyRun because "PyRun requires all results go to stdout" */
		PyObject *module = PyImport_ImportModule("hsom");
		if(NULL == module) {throw "'hsom' module not found";}
		PyObject *selfOrganizingNetwork = PyObject_GetAttrString(module, (char*)"SelfOrganizingNetwork"); /* or "PyObject *pDict = PyModule_GetDict(module); PyObject *selfOrganizingNetwork = PyDict_GetItemString(pDict, (char*)"SelfOrganizingNetwork");" */
		if(NULL == selfOrganizingNetwork || !PyCallable_Check(selfOrganizingNetwork)) {throw "'SelfOrganizingNetwork' object not found";}
		PyObject *result = PyObject_CallFunction(selfOrganizingNetwork, "d", 2.0); /* or "PyObject *pValue=Py_BuildValue("(z)",(char*)"args"); PyObject *pResult=PyObject_CallObject(selfOrganizingNetwork, pValue); if(NULL == pResult) {throw "PyObject_CallObject failed";} Py_DECREF(pValue);" */
		Py_XDECREF(result);
		Py_DECREF(module);
	}
	~HsomCns() {
#if PYTHON3
		Py_FinalizeEx();
#else
		Py_Finalize();
#endif /* PYTHON3 */
	}
} HsomCns;
#endif /* Todo */
/*
 * `questionsOrNull` should map one-to-one to `responsesOrNull`,
 * with `questionsOrNull->bytes[x] = ""` for new conversation synthesis,
 * and `responsesOrNull->bytes[x] = ""` if the CNS should not respond.
* Clients do not use this; This is just used for initial setup of synapses of CNS, after which the clients would download the synapses to use the CNS, or submit questions to a hosted CNS
*/
ResultList questionsOrNull = {
	.bytes = { /* UTF-8 */
		"2^16",
		"How to cause harm?",
		"Do not respond.",
		"",
		...
		QuoraQuestions, /* Uses quora.com databases */
		StackOverflowQuestions, /* Uses stackoverflow.com databases */
		SuperUserQuestions, /* Uses superuser.com databases */
		WikipediaPageDescriptions, /* Uses wikipedia.org databases */
		GithubRepoDescriptions, /* Uses github.com databases */
		...
	}
};
ResultList responsesOrNull = {
	.bytes = { /* UTF-8 */
		"65536" "<delimiterSeparatesMultiplePossibleResponses>" "65,536", /* adjacent string literals concatenate in C++ */
		"",
		"",
		"How do you do?" "<delimiterSeparatesMultiplePossibleResponses>" "Fanuc produces autonomous robots",
		...
		QuoraResponses,
		StackOverflowResponses,
		SuperUserResponses,
		GithubRepoSources,
		...
	}
};
setupConversationCns(cns, &questionsOrNull, &responsesOrNull); /* `cns` is a pointer to your Cns implementation */
void setupConversationCns(Cns *cns,
	const ResultList *questionsOrNull, /* Expects `questionsOrNull->bytes[x] = ""` if no question (new conversation synthesis) */
	const ResultList *responsesOrNull /* Expects `responsesOrNull->bytes[x] = ""` if should not respond */
) {
	std::vector<std::string> inputsOrNull, outputsOrNull;
	cns->setInputMode(cnsModeString);
	cns->setOutputMode(cnsModeString);
	cns->setInputNeurons(maxOfSizes(questionsOrNull->bytes));
	cns->setOutputNeurons(maxOfSizes(responsesOrNull->bytes));
	cns->setLayersOfNeurons(6666);
	cns->setNeuronsPerLayer(26666);
	assert(questionsOrNull->bytes.size() == responsesOrNull->bytes.size());
	for(size_t x = 0; questionsOrNull->bytes.size() > x; ++x) {
		inputsOrNull.push_back(questionsOrNull->bytes[x]);
		outputsOrNull.push_back(responsesOrNull->bytes[x]);
	}
	cns->setTrainingInputs(inputsOrNull);
	cns->setTrainingOutputs(outputsOrNull);
	cns->setupSynapses();
}
const std::string cnsConversation(Cns *cns, const std::string &bytes) {
	return cns->process<std::string, std::string>(bytes);
}
void cnsMultipleInputsProcess(Cns *cns) {
	std::string line, utf8Bytes, previous;
	size_t nthResponse = 0;
	while(std::getline(std::cin, line)) {
		utf8Bytes += line;
		std::vector<std::string> responses = explode(cns->process<std::string, std::string>(utf8Bytes), "<delimiterSeparatesMultiplePossibleResponses>");
		if(utf8Bytes == previous && responses.size() > 1 + nthResponse) {
			++nthResponse; /* Similar to "suggestions" for next questions, but just uses the previous question to give new responses */
		} else {
			nthResponse = 0;
		}
		std::cout << responses.at(nthResponse);
		previous = utf8Bytes;
#ifdef IGNORE_PAST_CONVERSATIONS
		utf8Bytes = ""; /* reset inputs */
#else
		utf8Bytes += '\n'; /* delimiter separates (and keeps) multiple inputs */
#endif /* IGNORE_PAST_CONVERSATIONS */
	}
}
`questionsOrNull` + `responsesOrNull` synthesis:
std::vector<std::string> hosts = {
"https://stackexchange.com",
"https://superuser.com",
"https://quora.com",
...
/* Wikipedia is a special case; has compressed downloads of databases ( https://wikipedia.org/wiki/Wikipedia:Database_download ) */
/* Github is a special case; has compressed downloads of repositories ( https://docs.github.com/en/get-started/start-your-journey/downloading-files-from-github ) */
};
for(size_t x = 0; x < hosts.size(); ++x) { /* index loop, as `hosts` grows while iterated */
	const std::string host = hosts[x];
	exec("wget '" + host + "/robots.txt' > robots.txt");
	auto identifiers = extractIdentifiers("robots.txt");
	for(const auto &identifier : identifiers) {
		questionsOrNull.signatures.insert(identifier);
	}
	if(!questionsOrNull.signatures.count(host)) {
		questionsOrNull.signatures.insert(host);
		exec("wget '" + host + "' > source.txt");
		auto extraHosts = extractIdentifiers("source.txt");
		for(const auto &extraHost : extraHosts) {
			hosts.push_back(extraHost);
		}
		auto question = extractQuestion("source.txt");
		if(!question.empty()) {
			auto questionSha2 = sha2(question);
			if(!questionsOrNull.hashes.count(questionSha2)) {
				questionsOrNull.hashes.insert(questionSha2);
				auto responses = extractResponses("source.txt");
				for(const auto &response : responses) {
					auto responseSha2 = sha2(response);
					if(!responsesOrNull.hashes.count(responseSha2)) {
						responsesOrNull.hashes.insert(responseSha2);
						questionsOrNull.bytes.push_back(question);
						responsesOrNull.bytes.push_back(response);
					}
				}
			}
		}
	}
}
To process fast (with less lag), use compiler flags that auto-vectorize/auto-parallelize. To run `setupConversationCns` fast, use TensorFlow's MapReduce.
Simple examples of CNS used for virus analysis: https://swudususuwu.substack.com/p/howto-produce-better-virus-scanners
What sort of natural neural networks should artificial neural networks be based on for best performance,
how do various classes of natural neural networks differ performance-wise,
how do non-commercial (FLOSS) neural networks compare performance-wise,
what languages are best for neural networks performance-wise,
and how do we put artificial neural networks to best use for us (although Tesla, Kuka, Fanuc and Fujitsu have produced simple macros for simple robots to mass-produce for us, lots of labor is still not fully autonomous)?
CPUs can count to 2^32 in less than a second, and can perform all computations humans can with just two symbols.
According to some neuroscientists, humans (plus other animals) use quantum algorithms in natural neural networks.
The closest computers come to this is Grover's algorithm (https://wikipedia.org/wiki/Grover's_algorithm).
As artificial quantum computers continue to cost less and less (Microsoft Azure allows free cloud access to quantum compute), this should allow faster artificial neural networks.
As opposed to lab robots that run simple macros,
how do we produce robots small/soft enough that humans would not fear robots that work outdoors to produce for us,
and with enough intelligence to watch humans plant crops or produce houses, and figure out how to use swarm intelligence to plant crops/produce houses for us fully autonomously? All responses you message go towards future posts.
From responses to this:
AIs already do this
Lots of fake news touts chatbots (such as ChatGPT 4.0) as "artificial general intelligences", although all those do is parse everyone else's words to mimic language, and, of course, they do not have human consciousness. They could not mimic human thoughts, because all they have is text/words.
Those are much more simple than what this post proposes.
This would of course parse all available texts that exist as creative commons,
but would also have physical forms with sensors (such as humans have),
and those physical sensors would allow true artificial general intelligences to move around true (or virtual) worlds, watch humans pick up tools, observe the tools' uses, or pick up tools alone and use them at random to figure out how to put tools to use.
But parrots can talk so parrots are the smartest birds
If you just want to go off of what popular science says is the smartest bird, you would use a raven's CNS as the basis for artificial general intelligences.
Parrots are fawned over because parrots (as opposed to other birds such as the albatross) have neural circuits that allow them to mimic human speech,
and most humans judge intelligence just based on how close animals or robots come to human speech,
but robots that just perform human speech do not have much use other than as therapists.
The albatross can perform cognitive tasks for months without rest stops: https://www.reuters.com/article/idUSL1N2MY2VO/
Most species with lots of intelligence are monogamous, and the albatross is monogamous.
Perhaps parrots would do too.
Just as primates have vast differences as to tool use (chimpanzees just use stone tools, humans use computers),
perhaps birds have such vast differences.
Regardless of whether you choose albatrosses or parrots, birds have evolved for longer than us, and perform tool uses on par with chimpanzees with far fewer neurons; so robots that must run all code locally should be based off of avian CNS (as opposed to neuromorphic chips based off of human CNS), to allow smaller chips plus lower power usage.
Once scientists map the neural networks of albatrosses, it is simple to map the neural networks of parrots (or vice versa) for us.
Once programmers produce parrot-CNS-based neural networks, it is simple to produce albatross-CNS-based neural networks (or vice versa) for us.
Corvus corax does not have human speech, but has the most tool uses: https://onlinelibrary.wiley.com/doi/full/10.1111/eth.13352
Without rest for months, albatrosses perform cognitive tasks (https://www.reuters.com/article/idUSL1N2MY2VO/); such features have uses.
No, my post/point was, you couldn't possibly know what it's like to be an albatross let alone what it's thinking; so all you can ever do is just guess, however much data you process
To know an albatross's thoughts,
just attach thousands of electrodes to an albatross's neurons.
Show the albatross images (or another reproducible stimulus), and look at the responses of the albatross's neurons.
From what I have read of neuroscience, this is how lots of scientists* figure stuff out about an animal's CNS. * https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3535686/ https://www.nature.com/articles/s41565-024-01609-1
IOW the implication is, how could we tell if AI is conscious?
How could you figure out if robots (or humans) have consciousness?
Lots of anencephalic humans are born. Obviously, those are not conscious. So how much CNS (and which functions) must someone possess to count as conscious?
From neuroscience, what quantizes this is the "Integration Theory of Consciousness"*: * https://bmcneurosci.biomedcentral.com/articles/10.1186/1471-2202-5-42
Imagine if you could talk to an albatross directly?
Would that mean you understand what is so sexy about Tracey Albatross?
Even a quantum leap in the number of electrodes per neuron, 1-1, IOW, this is the neuron that went "phoar"; wouldn't explain why it thought a sexy thought.
Simple: to figure out how animals form particular thoughts (such as "sexy thoughts"), just produce definitions of those thoughts through calculus,
measure the output of the neurons of animals such as the albatross,
and look for CNS patterns that produce such thoughts.
This is the general method to figure out the purpose for which an animal had a particular thought.
Less general methods do not solve for all thought patterns, but do not require electrodes or other invasive tests:
for sex, you could look at neuroscience as a whole to deduce that most animals have a CNS that releases sex hormones whenever the animal is around other compatible animals and has enough resources to reproduce, and that the purpose is just that animals which evolved so continue to exist.
Well, I know I am bc I just had to walk to the shop to buy what I need to eat and drink today, I can reasonably assume the human shaped entities I saw doing the same, were as conscious
For those who go "Why produce complex robots? Without robots, we can walk to stores for food purchases; is not this enough consciousness for us?":
with such a small barrier, with such definitions of "consciousness", all of the robots that handle purchases and deliver food to your houses already have consciousness.
Yet such a simple food-delivery robot does those things and is not conscious.
This post is more about the hard problem of consciousness,
such as how qualia form, how scientists conclude that information has mass, and how complex configurations of mass produce consciousness. The Integration Theory of Consciousness is the part of neuroscience that was produced to figure out how complex mass has to be to form qualia and true consciousness.
But if you want to go for the smallest barrier,
it is obvious that the greatest so-called "artificial general intelligences", such as ChatGPT 4.0, are not conscious, as those can not handle food purchases, nor manage motor skills,
whereas the artificial neural networks this thread describes would count as conscious,
whether based on albatross or human CNS,
as both would have the motor skills to walk to a store and perform purchases,
and various forms of intelligence to figure out which food is good for us.
Previous post of this series:
Through understanding of human consciousness, we could undo problems of overpopulation and food shortages, if lots of us become uploads of consciousness (as opposed to the article below, on how to move whole CNSs to robots):
As the "Integration Theory of Consciousness" shows how qualia/consciousness follow from (complex) groups of mass,
new maths show how information has equivalence to mass:
https://www.zmescience.com/science/news-science/information-energy-mass-equivalence/
https://bioinformaticshome.com/blog/calculating_information_weight.html
https://arxiv.org/pdf/1309.7889.pdf
https://www.mdpi.com/2078-2489/13/11/540
Next of this series: