For many years, I’ve been suffering from the problem of information overload: I’m subscribed to many content feeds that are very valuable to me, but I don’t have any way to consume them effectively – even if I sat all day reading them, I wouldn’t be able to cover them all.
Thinking about this problem, I came to understand three things:
- The problem starts with the medium – the interface in which we consume content feeds today is like a baggage carousel: you watch a list of items arriving in random order until you see what you’re looking for. This interface is somewhat effective with a few dozen items, but will never work with thousands. A supermarket, on the other hand, is an effective interface for selecting a few items out of thousands, because it’s an organized space you can walk into & see all the items, & since you go there frequently you know exactly where to find what interests you & discover more along the way.
- There are many good technologies today for automatically processing large amounts of information so that people can make value out of it, but using them requires a team of engineers working for weeks or months to build a solution for one specific content feed & use-case. This is similar to the situation before the Web, where if you wanted to connect to some information you needed to build the software, protocols & infrastructure for accessing it remotely. The Web introduced a simple & powerful standard way to access information, which removed these barriers & enabled anyone to publish & access information from anywhere in the world. Similarly, we need a simple & powerful way to enable anyone to process any content feed & make value out of it – as simple as writing an HTML file.
- Until today, every person needed just an internet connection & a device to run a browser on in order to access information & services. However, by now we’ve stretched to the limit our ability to consume & make value from the sea of information available to us – Slack/Email/Articles/Twitter/Data/Opportunities/Events/&c. Powerful machine learning & autonomous software agents can now enable every person to have a team of bots working for them, processing information & creating value out of it. So a browser & an internet connection are not enough anymore – you need another piece of software/service to cope with the never-ending streams of information flooding you, & it needs to be available & affordable for everyone, just like an internet connection & a browser.
So I set out to design an architecture & solution based on these ideas, which I call the Web Wide Matrix. Inspired by the Matrix movie, it is a
- Virtual Reality interface for consuming content feeds as an organized space
- generated by a personal team of hacker bots, working inside software hovercrafts to process your content feeds
- using training courses that are as simple to write as HTML documents
To make this a reality, I’m building this as an open-source initiative, led by a non-profit organization (called the Wachowski groupoid, in honor of the Matrix creators), currently consisting of just myself. I’ve written an initial POC & put up a website for the project. Check it out to learn more & drop me a note if you’d like to join me in solving this problem once & for all.
I was at the movie theater this weekend, & wondered again how people group together in a dark room, shut down their consciousness, & for 2 hours live the (usually fictional) lives of other people. Edward Yang said in one of his films (A One and a Two…) that with the normal amount of movies people watch these days, they’re actually living about 5000 years.
This naturally leads me to the idea of sending our information machines to the movie theater as well. Whatever benefit we get from movies will probably benefit them too. You could object that people are defined by the feelings art evokes in them, & that machines have nothing to do with that. Nevertheless, I think it can be a great way to educate our androids.
& more practically: if information machines need to understand our social & business world, & become domain experts in many human fields, why shouldn’t we provide them with movie scripts depicting scenes in various domains, & let them apply their self-organizing machine learning to make sense of those domains? Sounds like David Harel’s development paradigm.
Google is targeting YouTube these days – maybe they’ve already got some movie-fan crawlers learning the human domain.
I was thinking about the simplest way to test my emergence engine, & came up with an extremely simple task – the reactive algorithm of a thermostat: measure the temperature, & turn the heating on & off to maintain a given temperature. It sounds very simple to code a program that does this, but what I’m going to experiment with is how to do it without any programming.
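For reference, the conventionally-programmed baseline really is tiny. Here is a minimal sketch – the sensor & actuator interfaces (`read_temperature`, `set_heating`) and the simulated `Room` are hypothetical stand-ins of my own, just to show the behavior the engine should reproduce without being programmed:

```python
# Minimal hand-coded thermostat baseline: the behavior the emergence
# engine should arrive at on its own.

def thermostat_step(read_temperature, set_heating, target, hysteresis=0.5):
    """One reactive control step: heating on below the target band,
    off above it, unchanged inside the band."""
    temperature = read_temperature()
    if temperature < target - hysteresis:
        set_heating(True)
    elif temperature > target + hysteresis:
        set_heating(False)

# Tiny simulated room to exercise the loop (hypothetical dynamics).
class Room:
    def __init__(self, temperature):
        self.temperature = temperature
        self.heating = False

    def tick(self):
        # Heating warms the room; otherwise it slowly cools.
        self.temperature += 0.4 if self.heating else -0.2

room = Room(temperature=17.0)
for _ in range(50):
    thermostat_step(lambda: room.temperature,
                    lambda on: setattr(room, "heating", on),
                    target=20.0)
    room.tick()

print(round(room.temperature, 1))  # settles near the 20.0 target
```

The whole "program" is one conditional – which is exactly why it makes a good minimal test for an engine that must discover this behavior instead of being given it.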
The emergence engine is a kind of general AI, capable of achieving goals without being programmed how to solve them. It’s based on the assumption that you don’t need to build real intelligence – just create a multitude of simple software workers with only very simple tools & logic, & let them swarm their way toward the system’s given goals.
So, here’s how I hope my engine will handle the test case:
- It should first learn, by elicitation, the model of a room with a temperature, a thermometer & a heating unit
- It should also learn the relevant beliefs: the effect of using the thermometer on the accuracy of the model, & the effect of turning the heating on & off on the room’s temperature
- It should then learn the desired temperature
- From this it should start deriving action plans & executing activities to achieve the goal of maintaining the desired temperature
- It should also adapt to changes in the room, e.g., a door is opened & more heating is needed, or the heating stops working & an alternative heating unit is required
I say “it”, but of course what’s doing all this is many collaborating agents working together to achieve the goal. This is done by breaking the value of the goal into smaller value “summs” assigned to states & activities leading to the goal, & having the agents collaborate on creating all these summs.
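One possible reading of the value-“summs” idea – this is my own interpretive sketch, not the engine’s actual implementation, & the state graph & discount factor are assumptions – is that the goal state carries the full value, & each state or activity leading toward it receives a discounted share, so that simple agents can greedily follow the value gradient:

```python
# Interpretive sketch: propagate value backwards from the goal state,
# giving each state a smaller "summ" the further it is from the goal.
# The thermostat state graph below is a hypothetical illustration.

GOAL_VALUE = 100.0
DISCOUNT = 0.8  # assumed decay per step away from the goal

# Each state lists the states reachable from it in one activity.
transitions = {
    "too_cold":    ["heating_on"],
    "heating_on":  ["at_target"],
    "too_hot":     ["heating_off"],
    "heating_off": ["at_target"],
    "at_target":   [],
}

def spread_summs(goal_state):
    """Fixpoint iteration: each state's summ is the discounted best
    summ among its successors; the goal keeps the full value."""
    summs = {goal_state: GOAL_VALUE}
    changed = True
    while changed:
        changed = False
        for state, nexts in transitions.items():
            best = max((summs.get(n, 0.0) for n in nexts), default=0.0)
            candidate = DISCOUNT * best
            if state != goal_state and candidate > summs.get(state, 0.0):
                summs[state] = candidate
                changed = True
    return summs

summs = spread_summs("at_target")
print(summs)
```

Under this reading, an agent in "too_cold" needs no global plan: it simply picks the neighboring state with the highest summ ("heating_on"), & the goal-directed behavior emerges from many such local choices.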
Although the design is very simple & intended for completely autonomous behavior, I noticed that I’ll be able to affect the engine & help it reach its goal by changing the knowledge driving it, i.e., the learnt beliefs according to which the agents work.
So, I can’t wait to see how the engine handles this – it will actually test whether the simple emergence design is enough to yield emergence, even if the value it delivers is so small & simple.
I read a few years ago about the DARPA CALO project (Cognitive Assistant that Learns & Organizes) – or was it the PAL program (Personalized Assistant that Learns)? Anyway, I was quite amazed, because back then I was thinking about a similar architecture & technologies. Well, about a month ago, they decided to actually ship the technology, & open its source!!!!
It's called OpenIRIS (http://www.openiris.org/), & it's a "Semantic Desktop", in which you work on your applications (Browser, Mail, Chat, Calendar, Tasks, Documents &c), & behind the scenes everything is analyzed & organized in a beautiful ontology (!!!) that enables you to "Integrate. Relate. Infer. Share.".
DARPA just paid researchers from some 22 universities to actually go & implement the semantic technologies that hold such huge promise, using today’s paradigms & technologies.
I started playing with it a few weeks ago, & today decided to actually use it. Well, I’m holding myself back from evangelizing (except for the post’s title), but I’m quite impressed by the result! There are some small problems, & the giant platform is slightly slow, but the basics seem to work – a giant OWL-based ontology is accumulated behind the scenes & used for integrating the information. (One thing does annoy me: I hope they’ll switch to Firefox (instead of the old Mozilla), because I can’t use a browser without my extensions…). I might even try to write a plug-in for FreeMind or some other app I can’t live without, & see how it works.
Thanks DARPA, SRI & all other researchers for bringing the future closer!
Update: Oops! There’s only a Windows version :(… Seems like I won’t be using it much, since my primary OS is Linux. (Hey, please spend the last-mile effort for the sake of Linux & Mac OS early adopters…)
Watching people play computer games is very informative: you watch the process by which their brain learns a complex behavior, masters it & improves its effectiveness using the score as a performance indicator. To make the game more intuitive, it can also provide a sensory stream similar to the real world, so that the feedback on the behavior’s effectiveness is more affective.
As people’s work becomes more & more computer-mediated, computer games can become an interesting training method, as the military & other industries are increasingly discovering. Maybe it would eventually shorten the period needed to reach competence, which, as Peter Norvig claims, normally requires 10 years.
Providing Computer Games for Computers may also be a nice way to train them: if they’re given the same training games as humans, they can employ simple machine learning & become competent at them. Compare with what David Harel suggests in “Come, Let’s Play“.