Crashing and Hacking the Smart City.

The extent to which mass urban surveillance will be tolerated in smart cities will differ around the world. Governments, with varying degrees of citizen input, will need to strike a balance between the cost of intrusion and the benefits of early detection.
Q) What form would the citizen input take on which a government would base its decision about the extent of surveillance? What would be the implications of buggy, brittle systems in the context of surveillance? What decisions would be based on such inputs, and what conclusions would be drawn from them?
An interesting point raised by Joy Buolamwini in her talk on algorithmic bias (https://www.youtube.com/watch?v=lbnVu3At-0o):

"Across the US, police departments are starting to use facial recognition software in their crime fighting arsenal."
What would be the implications of using such biased systems, and who monitors these biases? Who checks their accuracy, and who makes them reliable?
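One way the question "who checks the accuracy" could be made concrete is a disaggregated audit: rather than reporting a single accuracy figure, error rates are computed separately for each demographic group, which is how disparities of the kind Buolamwini describes are usually surfaced. Below is a minimal sketch of such an audit; the function name, group labels, and audit records are all hypothetical, invented for illustration.

```python
from collections import defaultdict

def per_group_error_rates(records):
    """Compute false-positive and false-negative rates per demographic group.

    `records` is an iterable of (group, predicted_match, actual_match) tuples,
    e.g. one tuple per face-matching decision in a hypothetical audit log.
    """
    tallies = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, predicted, actual in records:
        t = tallies[group]
        if predicted and actual:
            t["tp"] += 1
        elif predicted and not actual:
            t["fp"] += 1  # system flagged a match that was wrong
        elif actual and not predicted:
            t["fn"] += 1  # system missed a true match
        else:
            t["tn"] += 1
    rates = {}
    for group, t in tallies.items():
        negatives = t["fp"] + t["tn"]
        positives = t["fn"] + t["tp"]
        rates[group] = {
            "fpr": t["fp"] / negatives if negatives else 0.0,
            "fnr": t["fn"] / positives if positives else 0.0,
        }
    return rates

# Hypothetical audit records: (group, system said match, ground truth)
sample = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, True),
    ("group_b", True, True), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, False),
]
rates = per_group_error_rates(sample)
# In this toy data, group_a's false-positive rate is far higher than group_b's.
```

A gap between groups in the false-positive rate is exactly the kind of disparity that matters in a policing context, since a false positive there means a wrongful identification.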
—-
A design that works for all.
Q) Is smart city development based on the philosophy of design for all? Is it supposed to cater to the needs and preferences of every citizen with magical systems that work wonderfully for everyone? If so, how are these systems envisioned to function, or are citizens required to function according to the systems? Are these solutions inclusive, and if not, who is excluded and on what basis? Are there exclusions based on age, ability or disability, wealth or poverty, or skin color, among other factors?
—-
If the first generation of smart cities does truly prove fatally flawed, from their ashes may grow the seeds of more resilient, democratic designs.
Q) Does this imply there is no better mechanism for smart city development than learning from failures? What cost would a city pay, in every respect, to recover from such a failure, and would it be worth it? Is the quest for smarter cities blinding us to some of the fatal implications it may hold in the future, such as global warming and energy crises, among others?