Normally, software manages a model of reality. Its only way to autonomously sync this model with reality is therefore the only language available for interrogating reality: conducting experiments.
This basically means coming up with a hypothesis, & devising activities whose results can potentially refute the hypothesis, for example by not conforming to a logical consequence (e.g., a necessarily implied prediction) of the hypothesis.
How can software do that?
- Various machine learning & logical techniques can be used to derive a hypothesis
- Experimenting requires the ability to flip the hypothesis, i.e., to understand its negation, the antithesis
- What’s required then is the ability to predict a result of the antithesis, & to devise a matching activity whose outcome can confirm that result
- The software then needs to perform the activity & measure the result
- If the result predicted from the antithesis is observed, the hypothesis is refuted
- Otherwise the software has taken 1 step toward validating its hypothesis, & has better synced its model of reality
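The steps above can be sketched in code. This is a minimal, hypothetical sketch: the names (`Hypothesis`, `run_experiment`, the field names) are illustrative assumptions, not part of any particular library.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    statement: str                 # e.g. "the component is running"
    antithesis_prediction: str     # result necessarily implied by the negation
    activity: Callable[[], str]    # experiment producing a measurable result

def run_experiment(h: Hypothesis) -> bool:
    """Return False if the hypothesis is refuted, True if it survives this test."""
    result = h.activity()          # perform the activity & measure the result
    # If the result predicted from the antithesis is observed,
    # the hypothesis is refuted.
    if result == h.antithesis_prediction:
        return False
    # Otherwise this is 1 step of validation, not a proof.
    return True
```

For example, a hypothesis "the component is running" with antithesis prediction "no-heartbeat" survives when the activity measures "heartbeat-ok", & is refuted when it measures "no-heartbeat".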
- The software concludes that an employee lacks motivation. It flips the hypothesis & derives a predicted result: if the employee is motivated, he’ll be interested in learning tips on how to improve his work & perform better. It then devises an activity: offering the employee such tips & measuring whether he accepts the offer & learns them, or not.
- The software’s model says a component is running well. However, in order to sync the model, it comes up with the hypothesis that the component is actually not running. It flips the hypothesis & predicts the result: if the component is running, it should answer a heartbeat query with a positive result. It then performs the heartbeat query activity & measures whether the result is positive. If so, the hypothesis is refuted.
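The heartbeat example could look roughly like this. The `heartbeat` function is a hypothetical stand-in for a real health check (e.g., an HTTP ping); the structure, not the query mechanism, is the point.

```python
def heartbeat(component: dict) -> bool:
    # Stand-in for a real heartbeat query (e.g. an HTTP health endpoint).
    return component.get("alive", False)

def hypothesis_not_running_refuted(component: dict) -> bool:
    """Hypothesis under test: the component is NOT running.
    Antithesis prediction: a running component answers the heartbeat positively.
    Returns True if the hypothesis is refuted (a positive heartbeat was measured)."""
    return heartbeat(component)

# A component that answers the heartbeat refutes "not running";
# a silent one leaves the hypothesis standing (for now).
running = {"alive": True}
silent = {}
```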
Of course, this procedure covers just 1 type of rigorous experiment; there are other types as well, such as surveys, which could prove very useful.