I think my comments on this week's readings are best addressed not by going through individual quotes, but from a more bird's-eye perspective of them as a constellation of texts that address issues of great importance, though through a seemingly naive view.

The biggest takeaway from these is the notion that feedback can let us know all the states of a system before we enter into it. While this is clearly not a realistic methodology for a variety of reasons*, the notion of feedback in a system is compelling. Using feedback to learn what we can control and influence in an environment is certainly a valuable tool, but does it scale well?
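To make the feedback idea concrete, here is a minimal sketch of a feedback loop as a toy thermostat: a controller observes the room, compares it to a setpoint, and feeds a correction back into the system. All names and constants are illustrative assumptions, not anything from the readings.

```python
def feedback_step(temperature, setpoint, gain=0.5):
    """One step of a proportional controller: the correction is fed back in."""
    error = setpoint - temperature   # what the feedback tells us about the system
    return gain * error              # the control action sent back into the environment

def simulate(setpoint=20.0, temperature=10.0, steps=30):
    """Run the loop: each step, the room gains heater output and loses heat outside."""
    readings = []
    for _ in range(steps):
        heat = feedback_step(temperature, setpoint)
        temperature += heat - 0.05 * (temperature - 5.0)  # leak toward a 5° exterior
        readings.append(temperature)
    return readings

history = simulate()
```

The loop settles near (though not exactly at) the setpoint, which is the point: feedback lets the controller steer a system it never fully modeled, using only the error it observes.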

If we are building a home, the number of factors that must be taken into account is enormous (occupancy, space, amount of light throughout the year, use of the rooms, materials used, and so on), yet it is still a manageable number of things. When we start to use these same tools to control and predict largely chaotic systems (like the environment, or capital markets), we must simplify the data, or else the model becomes too large for any reasonable analysis or observation. As soon as a simplification happens, we lose an accurate predictive model for feedback, and our system can quickly give us bad results (bad in the sense that the information is inaccurate; it may very well be exactly what we want to hear).
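The simplification problem can be sketched in a few lines. This is a toy example with made-up numbers: the true system depends on two variables, but the simplified model keeps only one, so its output stays confident while becoming wrong whenever the dropped variable matters.

```python
def true_system(x, z):
    """The full system: output depends on x and on a second factor z."""
    return 3.0 * x + 4.0 * z

def simplified_model(x):
    """The simplified model: z has been dropped to keep the model tractable."""
    return 3.0 * x

# When z happens to be negligible, the simplification looks fine...
small_error = abs(true_system(2.0, 0.1) - simplified_model(2.0))

# ...but when z matters, the model is badly off while looking just as certain.
large_error = abs(true_system(2.0, 5.0) - simplified_model(2.0))
```

Nothing in the model's output signals that anything is missing; the error lives entirely in the variable that was simplified away.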

Applying these ideas of cybernetics to controlled systems works, but expecting that a computer model will save us all is total folly. I would hope that in the intervening years since these texts were written we've learned a few things about this, but given the financial, political, and social models that are devolving despite computers telling us otherwise, I'm holding out little hope.

*For example, as a system changes, the models need to change with it. If you model the system in its initial state and then feed in data from the changed system, you'll have to renegotiate the model all over again to accommodate the changes.
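The footnote's point can be shown with a toy illustration (assumed numbers, not any specific method): a linear "model" fit to a system's initial behavior keeps predicting confidently after the system itself changes, so the feedback it provides goes stale.

```python
def fit_slope(xs, ys):
    """Least-squares slope through the origin: a model of the initial state."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Initial state: the system's output is 2x its input, and the model learns that.
xs = [1.0, 2.0, 3.0, 4.0]
initial = [2.0 * x for x in xs]
model = fit_slope(xs, initial)

# The system changes (output is now 5x the input), but the model does not.
changed = [5.0 * x for x in xs]
errors = [abs(model * x - y) for x, y in zip(xs, changed)]
# The stale model's error grows with every observation from the changed system.
```

The only remedy is exactly what the footnote says: refit the model against the changed system, and accept that this renegotiation never ends.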