4 Comments

This is such a great initiative. I feel the same way at work; things are expected to happen magically using "AI". Now I have something to share the next time I hear this.

Your work is going to train a lot of smart minds and give them a whole new perspective, or even new careers. Love it and looking forward to more of it!! Thanks again for doing this.


To me, it looks like you started from the wrong end with that 'limits of prediction' course.

Trust in predictive models of human behavior is important to the current state of many academic fields. Yet there is a soft 'proof' that these models all work from reduced information. We also know that we are working off of past information, and that the people being modeled may have information similar to, or different from, what the modeler has. From those elements, some inferences can be made that suggest several hard limits on forecasting human behavior.

The first question of the soft proof is: in terms of information and complexity, can a human mind contain a full-fidelity model of a human mind? Specific answers do not really matter, so long as they are not close to 'one human mind can hold infinitely many human minds at perfect fidelity'. The finding necessary for the later analysis is that there is some number of minds that a human can only understand with reduced fidelity, not full fidelity.
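One way to pin that down, treating description length in bits as a crude stand-in for fidelity (my framing, purely a back-of-envelope assumption):

```latex
% Crude counting argument; assumes a mind has capacity C bits and that a
% full-fidelity model of any mind also needs at least C bits.
\[
\text{bits needed for } n \text{ full-fidelity models} \;\ge\; nC \;>\; C
\quad \text{for every } n \ge 2,
\]
\[
\text{so within a budget of } C \text{ bits, the fidelity available per modeled
mind is at most } \tfrac{C}{n}, \text{ and it shrinks as the group grows.}
\]
```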

One result is that history is both hurt and helped by the fact that the information we have about previous generations is very lossy compared to what that generation had, or believed it had, about itself.

Reduced-fidelity modeling is a significant thing to be certain of, because it can undermine our confidence that we can measure enough to statistically establish that one sample and another are the same, or that our statistical inferences are really valid.
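A toy illustration of that worry (every number and variable name below is invented for the example, not drawn from any real study): two groups look statistically identical on the trait the modeler actually measures, while an unmeasured trait that drives behavior differs sharply.

```python
# Toy sketch: reduced-fidelity measurement hides a real difference between samples.
# All distributions and names here are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

# What the modeler measures looks identical across the two groups...
group_a_measured = rng.normal(0.0, 1.0, n)
group_b_measured = rng.normal(0.0, 1.0, n)

# ...but an unmeasured trait (say, private information) differs sharply.
group_a_hidden = rng.normal(0.0, 1.0, n)
group_b_hidden = rng.normal(2.0, 1.0, n)

# On the reduced-fidelity data, a standard test says "same population".
_, p_measured = stats.ttest_ind(group_a_measured, group_b_measured)
print(f"measured trait only: p = {p_measured:.2f}")

# Behavior depends on both traits, so the samples were never really the same.
behavior_a = group_a_measured + group_a_hidden
behavior_b = group_b_measured + group_b_hidden
_, p_behavior = stats.ttest_ind(behavior_a, behavior_b)
print(f"actual behavior:     p = {p_behavior:.2g}")
```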

That we form theory from past behavior is important, because there is a critical difference between human behavior and, for one example, the elasticity of copper at small deformations. We sometimes talk about matter having 'memory', retaining information about this or that bit of its history. For a large enough group of humans, at least some individuals in that group are forming their own models, and changing their behavior based on those models.

For a large enough group, for just about all of history and prehistory, there has been a complicated arms race between manipulators guessing at models of most individuals in the group, and the manipulated, who, when they feel the manipulation is malicious, are motivated to change their behavior or their mental models. This may average out 'most' of the time.

An academic modeler has at least three potential problems.

1. People in the modeled group who have the same information, build a similar model of behavior, and then use it to identify a choice of behavior that would break the model. Acting on this should be rare if the group does not believe the academic modeler to be a malicious manipulator. (A toy simulation of this feedback appears after the list.)

2. People in the modeled group with different information, who change their behavior based on factors the modeler cannot predict.

3. Information transfer from the modeler to the modeled group. The general public definitely has at least some information about claims made within academia.
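A minimal simulation of the first problem (the update rule, noise levels, and the fraction of 'aware' people are all invented for illustration): a modeler fits past behavior, publishes the forecast, and part of the group deliberately deviates from it, so the forecast error jumps even though nothing about the underlying population changed.

```python
# Minimal sketch of problem 1: part of the modeled group reacting to the model.
# Every parameter here is invented; the point is only the feedback loop.
import numpy as np

rng = np.random.default_rng(1)
n_people = 1000
true_mean = 1.0

# Past behavior: the modeler observes it and publishes "people do about 1.0".
past = true_mean + rng.normal(0.0, 0.2, n_people)
published_forecast = past.mean()

# Next period: most people behave as before, but a fraction who learned the
# published model choose behavior specifically to break it.
aware_fraction = 0.3
aware = rng.random(n_people) < aware_fraction
future = true_mean + rng.normal(0.0, 0.2, n_people)
future[aware] = published_forecast + 2.0  # deliberate, model-breaking deviation

print(f"published forecast:   {published_forecast:.2f}")
print(f"realized average:     {future.mean():.2f}")
print(f"naive forecast error: {abs(future.mean() - published_forecast):.2f}")
```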

For specific domains of behavior forecasting, narrower failure modes can be found where bad models lead to bad forecasts. One of the frequent ones is identifying N-1 stages in some specialized history, then using that to forecast and push an Nth stage. Industry 4.0 is an example where I had, and still have, profound reservations.

Where an academic field has been very careless about those three potential problems and other constraints, the field can be profoundly overconfident in behavior forecasts that are incorrect, or that can be confidently predicted to be wrong. For a chaotic system, it may be impossible to predict what actually happens, but it may be easy to verify that a specific model is probably wrong.
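A concrete version of the chaos point, using the logistic map purely as a stand-in (the map and every parameter are my choice, not from any particular field): point prediction fails within a few dozen steps even with a near-perfect model, while a model with the wrong parameter is trivially falsified by the same data.

```python
# Logistic map sketch: trajectories are unpredictable, yet a wrong model is
# still easy to reject. The map and parameters are illustrative only.
import numpy as np

def logistic(x0, r, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

r_true = 3.9  # chaotic regime
truth = logistic(0.2, r_true, 60)
forecast = logistic(0.2 + 1e-9, r_true, 60)  # same model, tiny initial error

# Prediction of what actually happens fails badly within a few dozen steps...
print("trajectory error at step 50:", abs(truth[50] - forecast[50]))

# ...but a specific wrong model (r = 3.2 locks into a 2-cycle) is easy to
# reject, because the data keep visiting values the wrong model never allows.
wrong = logistic(0.2, 3.2, 60)
print("distinct values in data, steps 40-60:       ", len(set(np.round(truth[40:], 3))))
print("distinct values in wrong model, steps 40-60:", len(set(np.round(wrong[40:], 3))))
```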

I think that some misuses of AI are clearly cases where an academic field's behavior forecasting confidence is at least partly wrong, and automating that forecasting does not make it better.

There is also clearly an issue with domains that do not involve human behavior and are additionally 'well understood', where automating the human process still has challenges. For a mental process, some steps may not be noticed. If you do not have a record that a mental step exists, and you can automate and verify the other steps, the missing step may still have an important impact on the quality of results. For a physical process, anyone deep into manufacturing can learn that there are approaches to tasks that are still very hard to automate well, and also tasks that can be automated very effectively.

There is definitely skill in figuring out which things can be productively automated using which methods. I'm personally not interested in neural nets; I find other automation schemes less confusing. I don't have much understanding of many domains, and prefer to be confident that I understand what I am doing before I automate it.


Have you evaluated this paper predicting economic growth at the micro scale?

https://www.nber.org/papers/w29569


I agree. Great initiative. Good to have people on the inside to confirm my hunches, which were probably also honed by reading Taleb.
