I am immensely excited that MIT has started a class on AGI. Lex Fridman is lecturing, and I hope I can catch a flight and attend in person. Before that happens, I want to share something. I was able to connect with the class online and post a message on Slack for the entire community. Here is the message:

Hey Lex, Ahmad here. While I wait for the AGI lectures, I am completing the ones from DeepCars.
I want to congratulate you on being the person who gets to officially (academically) promote AGI.

tl;dr: I talk about NNs not being black boxes, and how we might solve AGI based on definitions of the G and the I (my insight).

I have reviewed the Lecture 1 slides and I really like MIT's AGI mission goals, especially the point that hype can distort and kill the purpose of a field. Personally, as a computer scientist, I believe that neural networks can be reified into linear mappings; they are not black boxes. Hyperplanes and dimensionality reduction have not been given their due attention with NNs in machine learning. Isn't a 3D graphical simulation also binary data in bits? By the same token, we can claim that a network can be reduced down. We can visualize feature maps and where they form, and we can also form an idea of their spatial relationships within the network, based on where and how activations occur; this gives us insight into how the NN and its components work. I do not believe NNs should be trained using only hyper-parameter tuning, as is so often done.
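As a small illustration of the "not a black box" claim: a toy sketch (the network shapes and names here are my own assumptions, not from any lecture) showing that a tiny ReLU network, at any given input, collapses into one explicit linear mapping determined by its activation pattern:

```python
import numpy as np

# Hypothetical two-layer ReLU network: y = W2 @ relu(W1 @ x)
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))

def forward(x):
    h = W1 @ x
    return W2 @ np.maximum(h, 0.0)

def local_linear_map(x):
    # The ReLU activation pattern at x fixes which hidden units are "on";
    # within that region the whole network is one linear map A = W2 @ D @ W1,
    # where D is a diagonal 0/1 mask.
    mask = (W1 @ x > 0).astype(float)
    return W2 @ (np.diag(mask) @ W1)

x = np.array([0.5, -1.0, 2.0])
A = local_linear_map(x)
# For this input, the reduced linear map reproduces the network's output.
assert np.allclose(A @ x, forward(x))
```

In this local sense the network is readable: every prediction comes from an inspectable matrix, not an opaque box.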

This brings us to AGI. To build it, we first have to define it, because different ideas are floating around: some held by scientists in different fields, some inspired by sci-fi.

AGI Roadmap
What are our goals?

  1. Replicate the human physical/metaphysical duality and develop a human-like social robot class?
  2. Reach technological singularity with a virtual AGI agent that is connected to the Internet and becomes an ASI? Imagine each country having its own ASI based on its own language/rules/traditions. Note: after reaching ASI, it can no longer be considered a mere agent! Such a system could control a country's defense systems, infrastructure, and all other devices on the local intranet.
  3. Create agents that allow humans to transfer their "mind" and exist immortally, free of physical deterioration. Note: from a third-person POV, an agent and environment (the nature of duality), over infinite time, have no uniqueness or randomness, hence individuality and agency make no sense.

Human-level intelligence:
- Do we create an AGI using a cognitive approach that maps the mental capacities of a mature human adult?
- Do we create an AGI with the structure of a human child's brain, one that develops ontologies and appropriate data representations as it grows and learns language(s)?

If the G and I stand for:

how human intelligence works, then AGI has to be reduced to biological fitness. This definition makes sense if we are looking for everyday cognitive robots that can pass all the AGI tests [1]. General intelligence in a human environment is an emergent property of humans: of how our genes have evolved our behaviour to enable survival, reproduction, and so on. This means an agent would be designed bottom-up, using a connectionist approach, and finalized as a human child's brain. That brain would then have to learn and mature over time before yielding intelligence. Neuroevolution is making the rounds these days, beating deep reinforcement learning at general-purpose game playing, but those are virtual agents.

It is important to note that human intelligence is not restricted to our behaviour, meaning a robot cannot reach human-level intelligence through adequate software alone. The entire human body itself is intelligent: it repairs and adapts itself over time, directed by communication from the brain. A human has cells that act as microbodies, serving as actuators within the body, whereas robots do not yet have nanomachines that can affect the electromechanical systems within them, such as oiling their gears or tuning their motors.

I have a framework for a cognitive architecture that I designed for humanoid robots. This architecture is designed to mimic the human mind and our thought process, but using a genetic algorithm (GA) to control all its variables seems inadequate to me...

Why? This brings us to our goal for AGI: do we want submissive robots? Do we want an IoT-connected OS for entire cities or governments, backed by blockchain ("safety")? Do we want "conscious" self-replicating machines? Based on the definition of G, we can choose whether to implement Maslow's hierarchy of needs within a humanoid robot, or the biological imperatives mentioned above. But we see no point in robots having survival instincts, territorialism, competition, reproduction, quality-of-life seeking, and group forming...because that is "dangerous", as is commonly believed.

Furthermore, this would insert another class into human society and cause unprecedented effects, something that the MIT mission goals stand against :)



Then comes the I part...
Intelligence. Is it an emergent behaviour of many smaller tasks? Some claim that the capacity to generalize is itself intelligence.

If G, I, then:
If the generality and intelligence of an AGI agent are defined as I have concluded, then we can build a framework for it without exploring further ML [2]. We surely have to train networks to be compatible with a memory store, where data can be represented in hierarchical stores with respect to time. Spatio-temporal abstractions (STA) of data are required, which can then be stored in hypernetworks (see: hypergraphs).
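A minimal sketch of what such a memory store could look like, with hyperedges linking several concepts at once to a single timestamped episode (the class and method names are purely illustrative assumptions):

```python
from collections import defaultdict

class HyperMemory:
    """Toy hypergraph memory: each hyperedge links any number of
    concept nodes to one timestamped episode (a sketch, not a spec)."""
    def __init__(self):
        self.edges = []                 # list of (time, frozenset of concepts)
        self.index = defaultdict(list)  # concept -> edge ids

    def store(self, t, concepts):
        eid = len(self.edges)
        self.edges.append((t, frozenset(concepts)))
        for c in concepts:
            self.index[c].append(eid)
        return eid

    def recall(self, concept):
        # Return every episode involving a concept, ordered in time.
        return sorted((self.edges[e] for e in self.index[concept]),
                      key=lambda ep: ep[0])

mem = HyperMemory()
mem.store(1, {"kitchen", "cup", "grasp"})
mem.store(2, {"kitchen", "door", "open"})
print(mem.recall("kitchen"))   # both episodes, time-ordered
```

Because an edge can touch any number of concepts, one episode is stored once but reachable from every concept it involves, which is the property an STA hierarchy would need.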

Let me conclude with how this addresses the AGI problem, given that DL still lags behind in clustering and dimensionality reduction when it comes to DNNs:

These hypernetworks themselves are modelled after ontologies that humans have developed over time, as our minds have developed. Right now we lack adequate knowledge representation, especially when it comes to concept formation, and DL cannot deal with hierarchical representations either. The ontology-based approach is what allows STA of data in a hierarchy. To a machine, an infinite ontology will have no semantic value, but to a human, the passage of time has strange effects :) (leaving out the details)...at least until we can develop metacognition for it: generating models of percept sequences and action sequences relative to a mapping of the environment. This way, a recurring thought process is built into the agent, which can access its goals and check whether its current actions are aligned with them...
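A toy illustration of the kind of hierarchy an ontology provides, where each concept has an increasingly abstract "is-a" chain (all concept names are made up for the example):

```python
# Toy ontology: "is-a" links give each percept a hierarchical home.
# Every name here is an illustrative assumption, not a fixed schema.
is_a = {
    "cup": "container",
    "container": "artifact",
    "artifact": "thing",
    "dog": "animal",
    "animal": "thing",
}

def ancestors(concept):
    """Walk up the is-a chain, yielding increasingly abstract concepts."""
    chain = []
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

print(ancestors("cup"))   # ['container', 'artifact', 'thing']
```

Abstraction then becomes a walk up the chain, and reification a walk back down, which is exactly the hierarchical representation plain DL pipelines lack.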

How to build it?

Once we have chosen a model for an AGI agent, we can talk about how to build it.


  1. A biologically inspired approach would suggest using only neural networks: currently, DL with NNs only tackles the problems of regression and classification. We can also use RBMs or a Hopfield network to store input patterns as perceptions, similar to how RNNs serve as memory. While clustering and dimensionality reduction with NNs haven't been fully explored, that does not mean we cannot use alternatives.
  2. An alternative approach would suggest a hybrid design of a cognitive architecture...
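For point 1, the Hopfield idea can be sketched minimally: a network stores ±1 patterns with a Hebbian rule and recalls a stored pattern from a corrupted cue (a sketch under my own toy parameters, not a full design):

```python
import numpy as np

# Minimal Hopfield network: Hebbian storage, sign-threshold recall.
def train(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n   # Hebbian outer-product rule
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def recall(W, x, steps=5):
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1               # break ties toward +1
    return x

pats = np.array([[1, -1, 1, -1, 1, -1],
                 [1, 1, -1, -1, 1, 1]], dtype=float)
W = train(pats)
noisy = pats[0].copy()
noisy[0] *= -1                      # corrupt one bit of the first pattern
print(recall(W, noisy))             # converges back to the stored pattern
```

This is the sense in which such networks "contain input patterns as perceptions": the stored pattern is an attractor, and a partial or noisy percept falls into it.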

Transfer Learning: STA allows transferring skills, since the data is not domain-dependent.
Supervised Data: an AGI can learn through imitation, transfer, and reinforcement against unsupervised data.
Rewards: depending on G and I, goals and sub-goals can be programmed, or generated from intents/beliefs via metacognition.
Fully Automated: a GA can control certain aspects of an AGI, but do we really need a fully automated robot? I believe we need an agent that can assume multiple roles and perform them in different domains, but not everything...

With the narrative of AGI shifting towards an agent acquiring a language and using it to develop social skills:

Generalization can refer to the abstraction and reification of spatio-temporal data based on a language, i.e., if a concept exists for it. New concepts could be formed too, similar to how languages evolve.
Intelligence could then refer to how this language is used to communicate with the environment and other agents to bring about behaviour, either individual or social.

Note:
This message has been slightly clarified from the original message that was posted. I am in the process of modifying this post to make it clearer.

Appendix:
IoT - Internet of Things
STA - Spatio-Temporal Abstraction

References:
[1]: Muehlhauser, Luke. "What is AGI?". Machine Intelligence Research Institute. Retrieved 1 May 2014.
[3]: https://twitter.com/ahmadovich_/status/949668773262430208
