Structured Thinking : Analysis, Exploration, Exploitation

Moderators: Elvis, DrVolin, Jeff

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Tue Oct 23, 2012 1:56 pm
The spacing effect essentially says that if you have a question (What is the fifth letter in this random sequence you learned?), and you can only study it, say, 5 times, then your memory of the answer (e) will be strongest if you spread your 5 tries out over a long period of time - days, weeks, and months. One of the worst things you can do is blow your 5 tries within a day or two.
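The expanding-gap idea is easy to make concrete. Below is a minimal sketch in Python; the starting gap and multiplier are invented for illustration, since real spaced-repetition schedulers (SM-2 and its descendants) adapt the gaps per item based on recall quality:

```python
from datetime import date, timedelta

def review_schedule(start, reviews=5, first_gap_days=1, multiplier=3):
    """Spread a fixed number of reviews over expanding intervals.

    The multiplier is illustrative; real spaced-repetition systems
    adjust each gap based on how well the item was recalled.
    """
    schedule = [start]
    gap = first_gap_days
    for _ in range(reviews - 1):
        schedule.append(schedule[-1] + timedelta(days=gap))
        gap *= multiplier
    return schedule

dates = review_schedule(date(2012, 10, 1))
# gaps of 1, 3, 9, and 27 days: reviews fall on days 0, 1, 4, 13, and 40
```

Five tries spread over forty days instead of two, which is exactly what the spacing effect prescribes.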
Most jobs and tasks are best performed by folks with certain high and low aptitude combinations (plus other things like training, of course). High aptitudes beyond job needs cause problems. The optimum combination for any given job or task resembles a recipe–a lot of one thing, some of another, a bit of this, and none of that.

Just one wrong high aptitude can make a job intolerable for a person–like onions in a chocolate cake. A person with a strong knack for working with others might hate solitary work and quit, but be tremendously productive and satisfied as part of a team. Whether a high or low aptitude is good or bad depends on the context. Anything can be an advantage or disadvantage depending on the situation. Talent is no exception.

Most people have about four or five strong talents out of the roughly two dozen independent aptitudes known to exist. Most jobs require about four or five. As many as 10% of the population has double that number of aptitudes–and that is a problem for them and their employers. The Johnson O’Connor Research Foundation, the oldest aptitude-testing organization in the country, has statistical evidence that people with too many aptitudes (TMAs) are less likely to obtain advanced education and/or succeed in a career than those with an average number of talents.

Being a TMA is a very mixed blessing. Strong talents are extremely powerful internal forces. One of the most important implications of my aptitude research is the strong possibility that emotional intensity is directly correlated with the intensity of a talent. Someone operating at a high-intensity level of talent (including reasoning) will also be operating at a high-intensity level of emotion. Every thought, memory or perception is directly connected to emotion–a holistic phenomenon.

It is quite possible that TMAs are continually operating in a hypersensitive manner. People hypersensitive to external and internal data in many forms and operating at a high emotional intensity level might very well become overstimulated. Ongoing overstimulation could explain the paralysis felt by some TMAs. They are so overwhelmed by perceptions, memories, thoughts and feelings that they can’t commit themselves to anything. Many of them need a lot of time alone to regenerate. Yet, this same turbulence can also lead to great insight and creativity.

The existence of a powerful force implies difficulty in learning to harness that force. Having a lot of strong talents is a bit like dealing with high voltage. You can do a lot of things with high voltage. However, it can also fry you. It takes a lot more knowledge and more safety precautions to work with high voltage rather than low. A lot of that voltage for TMAs is emotional. Few people know how to handle normal emotion, let alone powerful, ongoing emotion.
The Roman Room technique is an ancient and effective way of remembering unstructured information where the relationship of items of information to other items of information is not important. It functions by imagining a room (e.g. your sitting room or bedroom). Within that room are objects. The technique works by associating images with those objects. To recall information, simply take a tour around the room in your mind, visualising the known objects and their associated images.

The Roman Room technique serves as one of the bases of the extremely effective language mnemonic systems described elsewhere within Mind Tools.

Mind Tools Mnemonic Grades:

Ease of Use - easy
Effectiveness - effective
Power - quite powerful
Learning investment - moderate
Who should use - people needing to store unstructured information on a topic.
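The technique reduces to an ordered tour of loci plus an association map. A toy sketch in Python (the room objects and the items to remember are invented for illustration):

```python
# A minimal sketch of the Roman Room idea: fix an ordered "tour" of
# familiar objects, then attach each fact to the next object in the tour.

room_tour = ["front door", "sofa", "bookshelf", "window", "lamp"]

def place_items(loci, items):
    """Pair each item to remember with a locus, in tour order."""
    if len(items) > len(loci):
        raise ValueError("not enough loci in the room for all items")
    return dict(zip(loci, items))

shopping = ["eggs", "coffee", "batteries"]
memory_room = place_items(room_tour, shopping)

# Recall = walk the tour in order and read off the associations.
recalled = [memory_room[locus] for locus in room_tour if locus in memory_room]
```

Because the tour order is fixed, recall comes back in the order the items were stored, even though the items themselves have no structure of their own.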

One of life's under-regarded principles is called being Behind the Power Curve (or "on the back side of the power curve"). The analogy comes from aviation, and is illustrated in this graph, which all would do well to understand:
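In rough terms, the power required for level flight is the sum of a parasite term that grows with the cube of airspeed and an induced term that shrinks with airspeed. A toy numeric sketch (the coefficients are invented) shows the back side of the curve:

```python
# Hedged sketch of the aviation power curve the analogy refers to.
# Power required ~ parasite term (grows with speed cubed) plus an
# induced term (shrinks with speed). A and B are made-up coefficients.

def power_required(v, A=0.01, B=4000.0):
    return A * v**3 + B / v

speeds = range(10, 101, 5)
curve = {v: power_required(v) for v in speeds}
best = min(curve, key=curve.get)  # minimum-power airspeed

# "Behind the power curve": at speeds below `best`, flying SLOWER
# demands MORE power -- the counterintuitive back side of the curve.
```

With these coefficients the minimum falls at 20 units of speed; below that, every reduction in speed raises the power required, which is exactly the regime the analogy warns about.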
Problem sets offer no such consistency. A given problem might take you ten minutes. On the other hand, it might devour an entire day and still yield no progress. This inconsistency is the bane of students, like Jake, stuck in technical classes.

How do you solve hard problem sets in such a way that they can be integrated into a structured, low-stress study schedule? In this post I will present a four step process. The process is an elaboration on the advice given in Straight-A. It’s a mixture of the results of my research for this book as well as personal experience, having fought these beasts over the past seven years.

A Four Step Process for Solving Hard Problem Sets

The motivating idea behind this strategy is simple: your brain can only work productively on a hard problem for 1-3 hours before needing to reboot. Rebooting your brain, so more productive work can be accomplished, requires a significant break. Preferably overnight.

Here’s a four step strategy built around this idea. It mimics the work schedule of the typical high-scoring technical student.

Step 1: Pick Off the Simple, Prime the Hard

Your first block of work should occur early in the week. Set aside 2 – 3 hours, in the morning. Make this the first thing you do that day (when your energy is at its highest). Your goal is two-fold. First, you want to solve easy problems. Your strong focus will help you avoid stupid mistakes. Second, you want to tackle at least two hard problems. You probably won’t solve them. This is why they are hard. But you can do something almost as important: prime them.

To prime a hard problem is to discover exactly why you can’t solve it. Pick an obvious approach — even if you suspect it won’t work — and start working through the problem until you get stuck. Identify why you are stuck. Ask what you need to figure out to make progress. What is it that makes this hard? Then take a break…

Step 2: Think in the Shower

For the next 2 – 3 days, think about how to get around the obstacles you discovered while priming. Don’t do this formally, in the library, with books around you. Instead, do this while walking around campus. While waiting for class to start. In the shower. I used to solve my Algorithms take home exam problems, for example, while jogging.

This is when breakthroughs occur. If you end up with a great insight, take 20 minutes, next time you can spare it, to sit down and write it up formally. If needed, prime a new hard problem so you can keep making progress as you wander campus throughout the week.

If you encounter ambiguities in the problem description that are giving you trouble, send concise questions to your TA requesting clarification. You don’t want these details to slow down progress any longer than they need to. (You might end up e-mailing your TA many times early in the week. This is okay so long as the questions are specific and concise. Don’t wait until office hours. By then, it’s too late.)

Step 3: Meet with your Problem Partner

A team effort is crucial for problem sets. But it has to be the right effort. Don’t meet with a large group. These are rarely efficient. Most of the time is spent griping about the class. Usually, there is one kid in the group who actually did the work, and, in the end, everyone copies off of him. Avoid this. The “smart kid” is often wrong, and likes the group because it boosts his self-esteem. Not to mention that your lack of understanding will come back to tag you on the exam.

The other extreme is to work alone. I see this a lot at MIT. Too many movies like Good Will Hunting got people thinking that to be smart at math means you should be able to stare at a problem for 5 – 10 seconds and then instantly solve it. Sorry. Doesn’t work that way. I walk past real geniuses every day — people, for example, who are my age and are also tenured professors — and guess what: it takes them a long time to solve hard problems; and they work with other people. The ideal configuration for a problem set is a single partner who is at roughly your ability and is willing to meet earlier in the week.

Meet with this partner for 2 – 3 hours to discuss progress made so far. Check your answers on the easy problems. Trade insights on the hard problems. Make new, collaborative attacks on those that still resist solving.

Step 4: Finalize the Problem Set at Office Hours

Show up early to office hours. Arrive understanding exactly why you are stuck on the small number of problems (hopefully) on which you are still stuck. Translate this into a small number of highly specific questions. Ask the TA these questions right after he or she arrives. The key here — and I base this on my own TA experience — is to avoid simply saying: “I don’t know how to do this problem, help!” That’s frustrating. Instead, you need targeted information that shows the effort you’ve expended. For example: “I’ve been trying approach XX, it’s promising, but I keep getting stuck with YY, can you point me in the right direction?”
Ten Criteria for Wicked Problems

Rittel and Webber characterise wicked problems by the following 10 criteria. (It has been pointed out that some of these criteria are closely related or have a high degree of overlap, and that they should therefore be condensed into four or five more general criteria. I think that this is a mistake, and that we should treat these criteria as 10 heuristic perspectives which will help us better understand the nature of such complex social planning issues.)

1. There is no definite formulation of a wicked problem.

"The information needed to understand the problem depends upon one's idea for solving it. This is to say: in order to describe a wicked problem in sufficient detail, one has to develop an exhaustive inventory for all the conceivable solutions ahead of time." [This seemingly incredible criterion is in fact treatable. See below.]
2. Wicked problems have no stopping rules.

In solving a tame problem, "… the problem-solver knows when he has done his job. There are criteria that tell when the solution or a solution has been found". With wicked problems you never come to a "final", "complete" or "fully correct" solution - since you have no objective criteria for such. The problem is continually evolving and mutating. You stop when you run out of resources, when a result is subjectively deemed "good enough" or when we feel "we've done what we can…"
3. Solutions to wicked problems are not true-or-false, but better or worse.

The criteria for judging the validity of a "solution" to a wicked problem are strongly stakeholder dependent. However, the judgments of different stakeholders …"are likely to differ widely to accord with their group or personal interests, their special value-sets, and their ideological predilections." Different stakeholders see different "solutions" as simply better or worse.
4. There is no immediate and no ultimate test of a solution to a wicked problem.

"… any solution, after being implemented, will generate waves of consequences over an extended - virtually an unbounded - period of time. Moreover, the next day's consequences of the solution may yield utterly undesirable repercussions which outweigh the intended advantages or the advantages accomplished hitherto."
5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.

"… every implemented solution is consequential. It leaves "traces" that cannot be undone … And every attempt to reverse a decision or correct for the undesired consequences poses yet another set of wicked problems … ."
6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.

"There are no criteria which enable one to prove that all the solutions to a wicked problem have been identified and considered. It may happen that no solution is found, owing to logical inconsistencies in the 'picture' of the problem."
7. Every wicked problem is essentially unique.

"There are no classes of wicked problems in the sense that the principles of solution can be developed to fit all members of that class." …Also, …"Part of the art of dealing with wicked problems is the art of not knowing too early which type of solution to apply." [Note: this is a very important point. See below.]
8. Every wicked problem can be considered to be a symptom of another [wicked] problem.

Also, many internal aspects of a wicked problem can be considered to be symptoms of other internal aspects of the same problem. A good deal of mutual and circular causality is involved, and the problem has many causal levels to consider. Complex judgements are required in order to determine an appropriate level of abstraction needed to define the problem.
9. The causes of a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution.

"There is no rule or procedure to determine the 'correct' explanation or combination of [explanations for a wicked problem]. The reason is that in dealing with wicked problems there are several more ways of refuting a hypothesis than there are permissible in the [e.g. physical] sciences."
10. [With wicked problems,] the planner has no right to be wrong.

In "hard" science, the researcher is allowed to make hypotheses that are later refuted. Indeed, it is just such hypothesis generation that is a primary motive force behind scientific development (Ritchey, 1991). Thus one is not penalised for making hypotheses that turn out to be wrong. "In the world of … wicked problems no such immunity is tolerated. Here the aim is not to find the truth, but to improve some characteristic of the world where people live. Planners are liable for the consequences of the actions they generate …"
How, then, does one tackle wicked problems? Some 20 years after Rittel & Webber wrote their article, Jonathan Rosenhead (1996), of the London School of Economics, presented the following criteria for dealing with complex social planning problems - criteria that were clearly influenced by the ideas presented by Rittel, Webber and Ackoff.
Accommodate multiple alternative perspectives rather than prescribe single solutions
Function through group interaction and iteration rather than back office calculations
Generate ownership of the problem formulation through stakeholder participation and transparency
Facilitate a graphical (visual) representation of the problem space for the systematic, group exploration of a solution space
Focus on relationships between discrete alternatives rather than continuous variables
Concentrate on possibility rather than probability
penal battalion, forward
User avatar
General Patton
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)


Postby Wombaticus Rex » Sun Nov 04, 2012 1:50 pm

Via LessWrong -- an extensive summary / précis of an excellent book on Bayesian reasoning, The Theory That Would Not Die, which is both fairly new and published by a university press, so it's relatively expensive. This does a good job of making it more accessible:

Long, worth printing.
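The core move the book is about fits in a few lines. A toy Bayes-rule update in Python (all the numbers here are invented for illustration):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem, from a prior and two likelihoods."""
    numerator = prior * p_evidence_given_h
    evidence = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / evidence

# A test that is 90% sensitive and 95% specific, for a condition
# with a 1% base rate:
p = posterior(0.01, 0.90, 0.05)
# p comes out roughly 0.15 -- a positive result leaves the hypothesis
# far from certain, because the base rate is so low.
```

The counterintuitive part is exactly what the toy numbers show: a seemingly accurate test plus a rare condition still yields a mostly-negative verdict, which is the kind of reasoning the book traces through two centuries of controversy.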
User avatar
Wombaticus Rex
Posts: 10608
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)


Postby General Patton » Thu Nov 08, 2012 1:12 am
Benevolent Dictator for Life (BDFL) is a title given to a small number of open-source software development leaders, typically project founders who retain the final say in disputes or arguments within the community.
The coinage originated in 1995 with reference to Guido van Rossum, creator of the Python programming language.[1][2] Shortly after van Rossum joined the Corporation for National Research Initiatives (CNRI), it appeared in a follow-up mail by Ken Manheimer to a meeting trying to create a semi-formal group that would oversee Python development and workshops.[1]
BDFL should not be confused with the more common term for open-source leaders, "benevolent dictator", which was popularized by Eric S. Raymond's essay "Homesteading the Noosphere" (1999).[3] Among other topics related to hacker culture, Raymond elaborates on how the nature of open source forces the "dictatorship" to keep itself benevolent, since a strong disagreement can lead to the forking of the project under the rule of new leaders.
Examples of people sometimes referred to as Benevolent Dictators for Life

Guido van Rossum, creator of Python[4][5]
Linus Torvalds, creator of Linux[6]
David Heinemeier Hansson, creator of Ruby on Rails[7]
Mark Shuttleworth calls himself the "Self-Appointed Benevolent Dictator for Life", or SABDFL, and the Ubuntu community often uses this name to refer to him[8][9]
Alexandre Julliard, maintainer of Wine
Adrian Holovaty and Jacob Kaplan-Moss, for Django[10][11]
Dries Buytaert, founder and leader of the Drupal project[12][13]
Rasmus Lerdorf, the creator of PHP[14]
Larry Wall, creator of Perl[15]
Patrick Volkerding, the creator of Slackware, a popular Linux distribution.
Daniel Robbins, the creator of the Gentoo and Funtoo Linux distributions. He is currently the Chief Architect of Funtoo Linux.

Topic modeling, a field of data science for finding abstract topics inside of raw data, aka too long didn't read, but please keep sending me email because I'm feeding it to my bot:
The Stanford Topic Modeling Toolbox (TMT) brings topic modeling tools to social scientists and others who wish to perform analysis on datasets that have a substantial textual component. The toolbox features the ability to:

Import and manipulate text from cells in Excel and other spreadsheets.
Train topic models (LDA, Labeled LDA, and PLDA) to create summaries of the text.
Select parameters (such as the number of topics) via a data-driven process.
Generate rich Excel-compatible outputs for tracking word usage across topics, time, and other groupings of data.
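To make "train topic models" less abstract, here is a minimal collapsed Gibbs sampler for LDA in pure Python. This is an illustrative sketch, not the TMT implementation; the hyperparameters, iteration count, and toy corpus are arbitrary choices:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, iters=100, alpha=0.1, beta=0.01):
    """docs: list of token lists. Returns doc-topic and topic-word counts."""
    V = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]               # doc -> topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic -> word counts
    nk = [0] * n_topics                                # topic totals
    z = []                                             # topic of each token
    for d, doc in enumerate(docs):                     # random initialization
        zd = []
        for w in doc:
            t = random.randrange(n_topics)
            zd.append(t)
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(iters):                             # resample each token
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # p(topic) proportional to (doc-topic + alpha) *
                # (topic-word + beta) / (topic total + V * beta)
                weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                           for k in range(n_topics)]
                r = random.random() * sum(weights)
                for k, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        t = k
                        break
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return ndk, nkw

docs = [["cat", "dog", "pet"], ["dog", "pet", "cat"],
        ["tax", "law", "court"], ["court", "law", "tax"]]
random.seed(0)
doc_topics, topic_words = lda_gibbs(docs, n_topics=2)
```

The sampler repeatedly reassigns each token to a topic in proportion to how well that topic already explains both the document and the word; the counts it returns are the "summaries of the text" the toolbox description refers to.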

Friendly reminder that women generally have more spending power, more wealth, and are seen as significantly more intelligent, compassionate and honest than men:

Studies show that women are responsible for buying 80% of household goods.

Although it is often played down, it is clear that women have a great deal of influence in the economy as consumers, in other words, a lot of spending power.

Increasingly, women take responsibility for buying larger items such as houses and cars. And women are also often responsible for buying gifts on behalf of their families. When kids go to birthday parties, it is usually the mother who purchases and wraps the gift. It often works the same way when a couple attends a wedding or anniversary. Women are faced with endless choices and decisions in their lives as consumers ...
In developing nations, women’s earned income is growing at 8.1 percent, compared to 5.8 percent for men. Globally, women control nearly $12 trillion of the $18 trillion total overall consumer spending, a figure predicted to rise to $15 trillion by 2014.


The gist of that is that any exercise in economic or social power hinges almost entirely on women. This has shaped decades of marketing and political discourse, and will continue to do so. Want to move to a post-scarcity economy? Keep it simple and add a human element to the media product. When in doubt watch Battlestar Galactica; the Syfy channel has nailed broad appeal. Game of Thrones also.


I don't think you're still paying attention

Markov Chain Monte Carlo explanation:
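The core MCMC move can be sketched in a few lines: propose a random step, then accept it with probability proportional to the density ratio. A minimal random-walk Metropolis sampler; the standard-normal target and step size are illustrative choices:

```python
import math
import random

def metropolis(log_density, start, steps, step_size=1.0):
    """Random-walk Metropolis: sample from an unnormalized density."""
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, step_size)
        # Accept with probability min(1, p(proposal) / p(x)).
        log_ratio = log_density(proposal) - log_density(x)
        if random.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
        samples.append(x)
    return samples

random.seed(1)
# Target: a standard normal, specified only up to a constant.
samples = metropolis(lambda v: -0.5 * v * v, start=0.0, steps=30000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean lands near 0 and var near 1, the target's true moments
```

Note that the chain only ever needs density ratios, so the normalizing constant cancels; that is the property that makes MCMC work on distributions you can't integrate.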
penal battalion, forward


Postby semper occultus » Wed Dec 12, 2012 1:45 pm

Peter Thiel on The Future of Legal Technology - Notes Essay

Here is an essay version of my notes from Peter Thiel’s recent guest lecture in Stanford Law’s Legal Technology course. As usual, this is not a verbatim transcript. Errors and omissions are my own. Credit for good stuff is Peter’s.

When thinking about the future of the computer age, we can think of many distant futures where computers do vastly more than humans can do. Whether there will eventually be some sort of superhuman-capable AI remains an open question. Generally speaking, people are probably too skeptical about advances in this area. There’s probably much more potential here than people assume.

It’s worth distinguishing thinking about the distant future—that is, what could happen in, say, 1,000 years—from thinking about the near future of the next 20 to 50 years. When talking about legal technology, it may be useful to talk first about the distant future, and then rewind to evaluate how our legal system is working and whether there are any changes on the horizon.

I. The Distant Future

The one thing that seems safe to say about the very distant future is that people are pretty limited in their thinking about it. There are all sorts of literary references, of course, ranging from 2001: A Space Odyssey to Futurama. But in truth, all the familiar sci-fi probably has much too narrow an intuition about what advanced AI would actually look like.

This follows directly from how we think about computers and people. We tend to think of all computers as more or less identical. Maybe some features are different, but the systems are mostly homogeneous. People, by contrast, are very different from one another. We look at the wide range of human characteristics—from empathy to cruelty, kindness to sociopathy—and perceive people to be quite diverse. Since people run our legal system, this heterogeneity translates into a wide range of outcomes in disputes. After all, if people are all different, it may matter a great deal who is the judge, jury, or prosecutor in your case. The converse of this super naive intuition is that, since all computers are the same, an automated legal system would be one in which you get the same answer in all sorts of different contexts.

This is probably backwards. Suppose you draw 3 concentric circles on a whiteboard: one dot, a ring around that dot, and a larger circle around that ring. The range of all possible humans best corresponds with the dot. The ring around the dot corresponds to all intelligent life forms; it’s a bigger range comprised of the superset of all humans, plus Martians, Alpha Centaurians, Andromedans, and so on. But the diversity of intelligent life is still constrained by evolution, chemistry, and biology. Computers aren’t. So the set of all intelligent machines would be the superset of all aliens. The range and diversity of possible computers is actually much bigger than the range of possible life forms under known rules.


What HAL will be like is thus a much harder question than knowing what would happen if Martians took control of the legal system.

The point is simply this: we have all sorts of these intuitions about computers and the future, and they are very incomplete at best. Implementation of all these diverse machines and AIs might produce better, worse, or totally incomprehensible systems. Certainly we hope for the first as we work toward building this technology. But the tremendous range these systems could occupy is always worth underscoring.

II. The Near Future

Let’s telescope this back to the narrower question of the near future. Forget about 1,000 years from now. Think instead what the world will look like 20 to 50 years from now. It’s conceivable, if not probable, that large parts of the legal system will be automated. Today we have automatic cameras that give speeding tickets if you drive too fast. Maybe in 20 years there will be a similarly automated determination of whether you’re paying your taxes or not. There are many interesting, unanswered questions about what these systems would be like. But our standard intuition is that it’s all pretty scary.

This bias is worth thinking really hard about. Why do we think that a more automated legal future is scary? Of course there may be problems with it. Those merit discussion. But the baseline fear of computers in the near term may actually tell us quite a bit about our current system.

A. Status Quo Bias

Let’s look at our current legal system de novo. Arguably, it’s actually quite scary itself. There are lots of crimes and laws on the books—so many, in fact, that it’s pretty obvious that the system simply wouldn’t work if everybody were actually held accountable for every technical violation. You can guess the thesis of Silverglate’s book Three Felonies A Day. Is that exaggerated? Maybe. But one suspects there’s a lot to it.

The drive for regulation and enforcement by inspection isn’t new or unique to America, of course. In 1945, the English playwright J.B. Priestley wrote a play called An Inspector Calls. The plot involves the mysterious death of a nanny who was working for an upper middle class family. The family insists it was just suicide, but an inspector investigates and finds that the family actually did all these bad things to drive the girl to suicide. The subtext is all of society is like this. The play opened in 1945 at the Bolshevik Theatre in Stalinist Russia. The last line was: “We must have more inspectors!” And the curtains closed to thunderous applause.

B. Fear of the Unknown

Despite firsthand knowledge of what bureaucracy can do, we tend to think that it is a computerized legal system that would be incredibly draconian and totalitarian. For some reason, there is a big fear of automatic implementation and it gets amplified as people extrapolate into the future.

The main pushback to this view is that it ignores the fact that the status quo is actually quite bad. Very often, justice isn’t done. Too often, things are largely arbitrary. Incredibly random events shape legal outcomes. Do people get caught? Given wide discretion, what do prosecutors decide to do? What goes on during jury selection? It seems inarguable that, to a large extent, random and uncertain processes determine guilt or liability. This version isn’t totalitarian, but it’s arbitrary all the same. We just tend not to notice because most of the time we get off the hook for stuff we do. So it sort of works.

C. Deviation from Certainty

But what is the nature of the randomness? That our legal system deviates from algorithmic determinism isn’t necessarily bad. The question is whether the deviation is subrational or superrational. Subrational deviation involves things that don’t make sense, but rather just happen for no reason at all. Maybe a cop is upset about something from earlier in the day and he takes it out on you. Or maybe the people on the jury don’t like how you look. People don’t like to focus on these subrational elements. Instead they prefer to talk as if all deviation were superrational: what’s arbitrary is not in fact arbitrary, but rather is perfect justice. Things are infinitely complex and nuanced. And our current system—but not predictable computers—appropriately factors all that in.

That narrative sounds good, but it probably isn’t true. Most deviation from predictability in our legal system is probably subrational deviation. In many contexts, this doesn’t matter all that much. Take speeding tickets, for example. Everyone gets caught occasionally, with roughly the same frequency. Maybe a system with better enforcement and lesser penalties would be slightly better, but one gets the sense that this isn’t such a big deal.

But there are more serious cases where the sub- vs. superrational nature of the deviation matters more. Drug laws are one example. This past election, Colorado voters voted to legalize marijuana there. California has done something functionally similar by declaring that simple possession is not an enforcement priority. But that’s only at the state level; possession remains illegal and enforced under federal law. Violation of the federal statute can and does mean big jail time for people who get caught. But the flipside is that there aren’t many federal enforcers, and these states aren’t inclined to enforce the federal law themselves. So people wind up having to do a bunch of probabilistic math. Maybe a regime in which you have a 1 in 1,000 chance of going to jail for a term of 1,000 days works reasonably well. But arguably it’s quite arbitrary; getting caught can feel like getting hit with a lightning bolt. Much better would be to have 1,000 offenders each go to jail for a day.
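The arithmetic behind that comparison is worth making explicit: the two regimes carry the same expected punishment per offender but wildly different variance. A quick sketch:

```python
# The two enforcement regimes in the passage have identical expected
# punishment but very different variance per offender.

def regime_stats(p_caught, days_if_caught):
    expected = p_caught * days_if_caught
    # variance of a scaled Bernoulli outcome
    variance = (days_if_caught ** 2) * p_caught * (1 - p_caught)
    return expected, variance

lottery = regime_stats(1 / 1000, 1000)   # rare but crushing sentence
uniform = regime_stats(1.0, 1)           # everyone serves one day

# Both regimes expect 1 day of jail, but the "lightning bolt" regime's
# variance is roughly 999 day^2 while the uniform regime's is zero.
```

Same deterrent cost on average, entirely different distribution of suffering, which is the sense in which the current regime is arbitrary.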

III. A (More) Transparent Future

It may be that the usual intuition is precisely backwards. Computerizing the legal system could make it much less arbitrary while still avoiding totalitarianism. There is no reason to think that automation is inherently draconian.

Of course, automating systems has consequences. Perhaps the biggest impact that computer tech and the information revolution have had over the last few decades has been increased transparency. More things today are brought to the surface than ever before in history. A fully transparent world is one where everyone gets arrested for the same crimes. As a purely descriptive matter, our trajectory certainly points in that direction. Normatively, there’s always the question of whether this trajectory is good or bad.

It’s hard to get a handle on the normative aspect. What does it mean to say that “transparency is good”? One might say that transparency is good because its opposite is criminality, which we know is bad. If people are illegally hiding money in Swiss bank accounts, maybe we should make all that transparent. But it’s just as easy to claim that the opposite of transparency is privacy, which we also tend to believe is good. Few would argue that the right to privacy is the same thing as the right to commit crimes in total secrecy.

One way to approach these questions is to first distinguish the descriptive from the normative and then hedge. Yes, the shift toward transparency has its problems. But it’s probably not reversible. Given that it’s happening, and given that it can be good or bad depending on how we adjust, we should probably focus on adjusting well. We’ll have to rethink these systems.

A. Transparency and Procedure

In some sense, computers are inherently transparent. Almost invariably, codifying and automating things makes them more transparent. From the computer revolution perspective, transparency involves more than simply making people aware of more information. Things become more transparent in a deeper, structural sense if and when code determines how they must happen.

One considerable benefit of this kind of transparency is that it can bring to light the injustices of existing legal or quasi-legal systems. Consider the torture scandals of the last decade. They got a lot of attention when information about the kinds of abuse going on was published. This, in turn, led to a lot of changes in process, with the end result being a rather creepy formalization under which you can sort of dunk prisoners in water… but don’t you dare shock them.

Why the drive toward transparency? One theory is that lower level people were getting pretty nervous. They understandably wanted the protection of clear guidelines to follow. They didn’t have those guidelines because the higher ups in the Bush administration didn’t really understand how the world was changing around them. So it all came to a head. In an increasingly transparent world, torture gets bureaucratized. And once you formalize and codify something, you can bring it to the surface and have a discussion about whatever injustice you may see.

If you’re skeptical, ask yourself which is safer: being a prisoner at Guantanamo or being a suspected cop killer in New York City. Authorities in the latter case are pretty careful not to formalize rules of procedure. It seems reasonable to assume that’s intentional.

B. Would Transparency Break The Law?

The overarching, more philosophical question is how well a more transparent legal system would work. Transparency makes some systems work better, but it can also make some systems worse.

So which kind of system is the legal system? Maybe it’s like the stock market, which automation generally makes more efficient. Instead of only being able to trade in increments of an eighth of a dollar, you can now trade to the penny. Traders now have access to all sorts of metrics like bidder volume. Things have become less arbitrary, more precise, and more efficient. If the law is mostly rational, and just slightly off, it may be the case that you can tweak things and make it right with a little automation.

Other systems aren’t like this at all. Many things only work when they are done in the dark, when no one knows exactly what’s going on. The phenomenon of scapegoating is a good example. It only works when people aren’t aware of it. If you were to say “We have a serious problem in the community. No one is happy. We need a psychosocial process whereby we can designate someone as a witch and then burn them in order to resolve all this tension,” the idea would be ruined. The whole thing only works if people remain ignorant about it.

The question can thus be reduced to this: is the legal system pretty just already, and perfectible like a market? Or is it more arbitrary and unjust, like a psychosocial phenomenon that breaks down when illuminated?

The standard view is the former, but the better view is the latter. Our legal system is probably more parts crazed psychosocial phenomenon. The naïve rationalistic view of transparency is the market view; small changes move things toward perfectibility. But transparency can be stronger and more destructive than that. Consider the tendency to want to become vegan if you watch a bunch of foie gras videos on YouTube. Afterwards, you’re not terribly concerned about small differences in production techniques or the particulars of the sourcing of the geese. Rather, you have seen the light, and have a big shift in perspective. Truly understanding our legal system probably has this same effect; once you throw more light on it, you’re able to fully appreciate just how bad things are underneath the surface.

C. Law and Order

Once you start to suspect that the status quo is quite bad, you can ask all sorts of interesting questions. Are judges and juries rational deliberating bodies? Are they weighing things in a careful, nuanced way? Or are they behaving irrationally, issuing judgments and verdicts that are more or less random? Are judges supernaturally smart people? The voice of the people? The voice of God? Exemplars of perfect justice? Or is the legal system really just a set of crazy processes?

A good rule of thumb in business is to never get entangled in the legal system in any way whatsoever. Invariably it’s an arbitrary and expensive distraction from what you’re actually trying to do. People underestimate the costs of engaging with plaintiff’s lawyers. It’s very easy to think: “Well, they’re just bringing a case. It will cost a little bit, but ultimately we will figure out the truth.” But that’s pretty idealized. If you’re dealing with a crazy arbitrary system and you never actually know what could happen to you, you end up negotiating with plaintiff’s lawyers just like the government negotiates with terrorists: not at all, except in every specific instance. When the machinery is too many parts random and insane, you always find a way to pay people off.

Looking forward, we can speculate about how things will turn out. The trend is toward automation, and things will probably look very different 20, 50, and 1000 years from now. We could end up with a much better or much worse system. But realizing that our baseline may not be as good as we tend to assume it is opens up new avenues for progress. For example, if uniformly enforcing current laws would land everyone in jail, and transparency is only increasing, we’ll pretty much have to become a more tolerant society. By placing the status quo in proper context, we will get better at adjusting to a changing world.

Questions from the Audience:

Question from the audience: Judge Posner recently opined in a blog post that humans don’t have free will. He argued that it is not objectionable to heavily tax wealthy people because, things being thoroughly deterministic, they made their fortunes through random chance and luck. If the free will point is true, there are also implications for criminal law, since there’s no point punishing people who are not morally culpable. How do you see technological advance interacting with the questions of free will, determinism, and predicting people’s behavior?

Peter Thiel: There are many different takes on this. For starters, it’s worth noting that any one big movement on this question might not shake things up too much. Maybe you don’t aim for retribution on people who aren’t morally culpable. But there are other arguments for jail even if you don’t believe in free will. Since there are several competing rationales for the criminal justice system, practically speaking it may not matter.

More abstractly, it seems clear that we are headed towards a more transparent system. But there are layers and layers of nuance on what that means and how that happens. There is no one day where some switch will be flipped and everything is illuminated. Theoretically, if you could flip that switch and determine all the precise causal connections between things, you would know how everything worked and could create that perfectly just system. But philosophically and neurobiologically, that is probably very far away. Much more likely is a rolling wave of transparency. More things are transparent today than in the past. But there’s a lot that is still hidden.

The order of operations—that is, the specific path the transparency wave takes—matters a great deal too. Take something like WikiLeaks. The basic idea was to make transparent the doings of various government agencies. One of the critical political/legal/social questions there was what became transparent first: all the bad things the US government was doing? Or the fact that Assange was assaulting various Swedish groupies? The sequence in which things become transparent is very important. Some version of this probably applies in all cases.

I agree with Posner that transparency often has a corrosive undermining effect. Existing institutions aren’t geared for it. I do suspect that people’s behavior still responds to incentives in some ways, even if there is no free will in the philosophical, counterfactual sense of the word. But I am sympathetic to part of the free will argument because, if you say that free will exists, you’re essentially saying two things:

1. The cause of your behavior came from within you, i.e. you were an unmoved mover; and
2. You could have done otherwise, in a counterfactual world.

But if you combine those two claims, the resulting world seems strange and implausible.

Practically, free will arguments are worth scrutiny. Ask yourself: in criminal law, which side makes arguments about free will? Invariably the answer is the prosecution. The line goes: “You killed this person. It was your decision to do that. You’re not even deformed; that’s an extrinsic factor. Rather, you are intrinsically evil.” Anyone who is skeptical about excessive prosecution should probably be skeptical about free will in law. But it makes sense to be less skeptical about it as a philosophical matter.

Question from the audience: There’s the AI joke that says that cars aren’t really autonomous until you order them to go to work and they go to the beach instead. What do you think about the future of encoding free will into computers? Can we imagine mens rea in a machine?

Peter Thiel: In practice it’s most useful to think of questions about free will as political questions. People bring up free will when they want to blame other people.

Theoretically, the nexus between free will and AI does raise interesting questions. If you turn the computer off, are you killing it? There are many different versions of this. My intuition is that we’re really bad at answering these questions. Common sense doesn’t really work; it’s likely to be so off that it’s just not helpful at all. This stuff may just be too weird to figure out in advance. Maybe the biggest lesson is that we should just be skeptical of our intuitions. So I’ll be skeptical of my intuitions, and will not answer your question.

Besides, the easier things are the near term things. Short of full-blown AI, we can automate certain processes and reap large efficiency gains while also avoiding qualms about turning the computers off at night. We should not conflate super intelligent computers with very good, but still dumber-than-human computers that do things for us. In the near term, we should welcome transparency and automation in our political and legal structures because this will force us to confront present injustices. The fear that all this leads to a Kafkaesque future isn’t illegitimate, but it’s still very speculative.

Question from the audience: How could you ever design a system that responds unpredictably? A cat or gorilla responds to stimulus unpredictably. But computers respond predictably.

Peter Thiel: There are a lot of ways in which computers already respond unpredictably. Microsoft Windows crashes unpredictably. Chess computers make unpredictable moves. These systems are deterministic, of course, in that they’ve been programmed. But often it’s not at all clear to their users what they’ll actually do. What move will Deep Blue make next? Practically speaking, we don’t know. What we do know is that the computer will play chess.

It’s harder if you have a computer that is smarter than humans. This becomes almost a theological question. If God always answers your prayers when you pray, maybe it’s not really God; maybe it’s a super intelligent computer that is working in a completely determinate way.

Question from the audience: One problem with transparency is that it can delegitimize otherwise legitimate authority. For instance, anyone can blog and post inaccurate or harmful information, and the noise drowns out more legitimate information. Couldn’t more transparency in the legal system actually be harmful because it would empower incorrect or illegitimate arguments?

Peter Thiel: This question gets at why it’s important to have an incremental process towards full transparency instead of a radical shift. There are certainly various countercurrents that could emerge.

But generally speaking the information age has tended to result in more homogenization of thought, not less. It just doesn’t seem true that transparency has enabled more isolated communities of belief to disingenuously tap into various shreds of data and thereby maintain an edifice where they couldn’t have before. It’s probably harder to start a cult today than it was in the ‘60s or ‘70s. Even though you have more data to piece together, your theory would get undermined and attacked from all angles. People wouldn’t buy it. So the big risk isn’t that excessively weird beliefs are sustained, but rather that we end up with one homogenized belief structure under which people mistakenly assume that all truth is known and there’s nothing left to figure out. This is hard to prove, of course. It’s perhaps the classic Internet debate. But generally the Internet probably makes people more alike than different. Think about the self-censorship angle. If everything you say is permanently archived, you’re likely to be more careful with your speech. My biggest worry about transparency is that it narrows the range of acceptable debate.

Question from the audience: How important is empathy in law? Human Rights Watch just released a report about fully autonomous robot military drones that actually make all the targeting decisions that humans are currently making. This seems like a pretty ominous development.

Peter Thiel: Briefly recapping my thesis here should help us approach this question. My general bias is pro-computer, pro-AI, and pro-transparency, with reservations here and there. In the main, our legal system deviates from a rational system not in a superrational way—i.e. empathy leading to otherwise unobtainable truth—but rather in a subrational way, where people are angry and act unjustly.

If you could have a system with zero empathy but also zero hate, that would probably be a large improvement over the status quo.

Regarding your example of automated killing in war contexts—that’s certainly very jarring. One can see a lot of problems with it. But the fundamental problem is not that the machines are killing people without feeling bad about it. The problem is simply that they’re killing people.

Question from the audience: But Human Rights Watch says that the more automated machines will kill more people, because human soldiers and operators sometimes hold back because of emotion and empathy.

Peter Thiel: This sort of opens up a counterfactual debate. Theory would seem to go the other way: more precision in war, such that you kill only actual combatants, results in fewer deaths because there is less collateral damage. Think of the carnage on the front in World War I. Suppose you have 1,000 people getting killed each day, and this continues for 3-4 years straight. Shouldn’t somebody have figured out that this was a bad idea? Why didn’t the people running things put an end to this? These questions suggest that our normal intuitions about war are completely wrong. If you heard that a child was being killed in an adjacent room, your instinct would be to run over and try to stop it. But in war, when many thousands are being killed… well, one sort of wonders how this is even possible. Clearly the normal intuitions don’t work.

One theory is that the politicians and generals who are running things are actually sociopaths who don’t care about the human costs. As we understand more neurobiology, it may come to light that we have a political system in which the people who want and manage to get power are, in fact, sociopaths. You can also get here with a simple syllogism: There’s not much empathy in war. That’s strange because most people have empathy. So it’s very possible that the people making war do not.

So, while it’s obvious that drones killing people in war is very disturbing, it may just be the war that is disturbing, and our intuitions are throwing us off.

Question from the audience: What is your take on building machines that work just like the human brain?

Peter Thiel: If you could model the human brain perfectly, you could probably build a machine version of it. There are all sorts of questions about whether this is possible.

The alternative path, especially in the short term, is smart but not AI-smart computers, like chess computers. We didn’t model the human brain to create these systems. They crunch moves. They play differently and better than humans. But they don’t use the same processes that humans do. So most AI that we’ll see, at least at first, is likely to be soft AI that’s decidedly non-human.

Question from the audience: But chess computers aren’t even soft AI, right? They are all programmed. If we just had enough time to crunch the moves and look at the code, we’d know what’s going on, right? So their moves are perfectly predictable.

Peter Thiel: Theoretically, chess computers are predictable. In practice, they aren’t. Arguably it’s the same with humans. We’re all made of atoms. Per quantum mechanics and physics, all our behavior is theoretically predictable. That doesn’t mean you could ever really do it.

Comment from the audience: There’s the anecdote of Kasparov resigning when Deep Blue made a bizarre move that he fatalistically interpreted as a sign that the computer had worked dozens of moves ahead. In reality the move was caused by a bug.

Peter Thiel: Well… I know Kasparov pretty well. There are a lot of things that he’d say happened there…

Question from the audience: I’m concerned about increased transparency not leaving room for tolerable behavior that’s not illegal. What’s your take on that?

Peter Thiel: That we are generally heading toward more transparency on a somewhat unpredictable path is a descriptive claim, not a normative one. This probably can’t be reversed; it’s hard to stop the arc of history. So we have to manage as best we can.

Certain things become harder to do in a more transparent world. Government, for example, might generally work best behind closed doors. Consider the fiscal cliff negotiations. If you said that they had to take place in front of C-SPAN cameras, things might work less well. Of course, it’s possible that they’d work better. But the baseline question is how good or bad the current system is. My view is that it’s actually quite bad, which is why greater transparency is more likely to be good for it.

I spoke with a high-ranking official fairly recently about how Facebook is making things more transparent. This person believed that government only works when it’s secret—a “conspiracy against the people, for the people” sort of narrative. His very sincerely held view was that our government essentially stopped working during the Nixon administration, and we haven’t had a functioning government in this country for 40 years. No one can have a strategy. No one can write notes. Everything is recorded and everything becomes a part of history. We can sympathize with this, in that it’s probably very frustrating for officials who are trying to govern. But normatively, perhaps it’s a good thing if we no longer have a functioning government. All it ever really did well was kill people.

If you believe the stories that most people tell—the government is doing public good, and there’s a sense of superhuman rationality to it—transparency will shatter your view. But if you think that our system is incredibly broken and dysfunctional in many ways, transparency forces discussion and retooling. It affords us a chance to end up with a much more tolerant, if very different, world.

Question from the audience: Can you explain what bringing more transparency to government or the legal system would look like? How, specifically, does automating the legal system lead to transparency?

Peter Thiel: Transparency can mean lots of things. We must be careful how we use the term. But take the simple example of people taking cell phone pictures of cops arresting people. That would make police-civilian interactions more transparent, in the thinnest sense. Maybe you find out that there are shockingly few procedural violations and that police are really well behaved. If so, this will increase confidence and make a good system even better. Of course, the reality may be that this transparency will expose the violations and arbitrariness in a bad system.

Capital punishment is another example. DNA testing can be seen as adding another layer of transparency to the system. It turns out that something like 20% of people accused of committing a capital crime are wrongly accused. That figure seems extraordinarily high; you’d think that with capital crimes, investigations would be much more serious and thorough and consequently there would be a very low rate of nabbing the wrong person. Today we’re increasingly skeptical of the justice of capital punishment, and for good reason. If the DNA tests had shown that we’ve never ever made an ID mistake in a capital case, we’d probably think very differently about our system.

The general insight is that as you codify things, you tend to bring to the surface what’s actually going on. One of the virtues of a more automated system is that it’s easier to describe accurately. You can actually understand how it works. At least in theory, you bring injustice to light. In practice, you’d then have to change the injustice. And you can’t do that if you don’t know about it.

Question from the audience: Doesn’t transparency to whom matter more than just transparency? Transparency to the programmer about witch-hunting, say, doesn’t expose the existence of witch-hunting to society at large, right? Should government software be open sourced?

Peter Thiel: I’ll push back on that question a little bit. Just because you have an algorithm doesn’t mean people will always know what it will do—this is the chess computer example again. It’s very possible that people wouldn’t understand some things even with transparency. We have transparency on the U.S. budget, but no one in Congress can actually read or understand it all.

It’s a big mistake to think that one system can be completely transparent to everybody. It’s better to think in terms of many hidden layers that only gradually get uncovered.

Question from the audience: Since there are different countries, there are obviously multiple legal systems that interact, not just one legal system. Is it problematic that we won’t see the same transparency in some systems that we will in others?

Peter Thiel: Again, the push back is that transparency isn’t a unitary concept. The sequencing path is really important. Does the government get more transparency into the people? The people into the government? Government into itself, and the machine just works more efficiently? Depending on just how you sequence it, you can end up with radically different versions.

Look at Twitter and Facebook as they related to the Arab Spring. Which way do these technologies cut in terms of transparency? In 2009, the Iranian government hacked Twitter and used it to identify and locate dissidents. But in Tunisia and Egypt, the numerous protest posts and tweets helped people realize that they weren’t the only ones who were unhappy. The exact same software plays out in extremely different ways depending on the sequencing.

Question from the audience: Is there a point in time where we just shift from current computers to future computers? Or does technological advance follow a gradual spectrum?

Peter Thiel: Maybe there’s a categorical difference at some point. Or maybe it’s just quantitative. It’s conceivable that at some point things are just really, really different. The 20-year story about greater transparency is one where you can make reasonable predictions as to what computers will likely do and what they’re likely to automate, even though the computers themselves will be a little different. But 1,000 years out is much more opaque. Will the computers be just or unjust? We have no good intuition about that. Maybe they’ll be more like God, or we’ll be dealing with something beyond good and evil.

Question from the audience: Traffic cameras are egalitarian. But cops might be racist. Do you think we run the risk of someday having racist or malicious computers?

Peter Thiel: In practice, we can still generally understand computers somewhat better than we can understand people. In the near term at least, more computer automation would produce systems that are more predictable and less arbitrary. There would be less empathy but also less hate.

In the longer term, of course, it could be just the opposite. There may be real problems there. But the key thing to understand is that we’re experiencing an irreversible shift toward greater transparency. This is true whether your time horizon is long-term, where things are mysterious and opaque, or short-term, where things become automated and predictable. Naturally, you have to get to the short-term first. So we should first realize the gains there, and we can figure out any long-term problems later.


Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Feb 11, 2013 1:38 pm

August 2010 Gartner Research

July 2011 Gartner Research

July 2012 Gartner Research

What changed?

Technologies that were slated to reach peak productivity moved from the 5-10 year range to the 2-5 year range over the course of about one year. Many stayed the same, slowly chugging along and gaining steam, assisted by their interconnection with other technologies.

Something I've been thinking about a lot - sometimes international events are much more predictable than domestic ones. Is it because of complexity or just because of the speed of operations? I'm not sure either way. And remembering the Zen of Python: "In the face of ambiguity, refuse the temptation to guess." So instead we look for certainties.

Israel: ... 57494.html
Based on a system of proportional representation, where voters select a party instead of a specific individual, Israeli elections have always resulted in coalition governments. To win an outright majority, a party would need to win more than 60 seats in the 120-member Knesset.

The minimum required for a party to win a Knesset seat is two percent of the total votes cast.

The prime minister is normally - but not always - the leader of the party that wins the most seats. He or she then negotiates with other parties to form a governing coalition.

In the 2009 election, although the incumbent prime minister's party, Kadima, won the most seats in the parliament, the Likud leader Binyamin Netanyahu was able to form a majority coalition government and become the new prime minister.
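The mechanics described above can be sketched in a few lines. Below is a toy largest-averages (D'Hondt-style) seat allocation with a 2% threshold; note that Israel's actual apportionment method (Bader-Ofer) differs in its details, and the party names and vote counts here are invented purely for illustration.

```python
def allocate_seats(votes, total_seats=120, threshold=0.02):
    """Toy largest-averages (D'Hondt-style) allocation with an electoral threshold."""
    total = sum(votes.values())
    # Parties below the electoral threshold win no seats at all.
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    seats = {p: 0 for p in eligible}
    for _ in range(total_seats):
        # Each successive seat goes to the party with the highest quotient v / (s + 1).
        winner = max(eligible, key=lambda p: eligible[p] / (seats[p] + 1))
        seats[winner] += 1
    return seats

# Hypothetical parties and vote counts, purely for illustration.
votes = {"A": 500_000, "B": 400_000, "C": 250_000, "D": 15_000}
result = allocate_seats(votes)  # "D" falls below 2% of the total and wins nothing
```

Even in this toy version you can see why coalitions are inevitable: unless one party takes more than half the votes that clear the threshold, no one reaches the 61 seats needed for an outright majority.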

Ahead of the 2013 elections, the Likud party merged with Yisrael Beiteinu to shore up Netanyahu's reelection. The joint list secured 31 seats, a small loss from their previous combined total.

The right-wing Haredi ultra-orthodox Jews are about 10% of the population and have a voter turnout of closer to 80%, versus 59-65% for the rest of the population. The Haredim have a very high birthrate of about 8.9 children per couple, have a male non-employment rate of about 65% (versus roughly 8% official unemployment for the total population), and give their children only religious studies after primary school. This makes them very dependent on government aid, as they have few modern skills, and also very predictable and loyal voters: ... e-20100511
Officially, Israel’s unemployment rate is about 8%. But that doesn’t include Israeli citizens who are not trying to find work, either because they feel disenfranchised, such as many Arab Israelis, or because they’ve chosen a life of state-subsidized religious study, such as many ultra-Orthodox Jews.

Nearly 27% of Arab men and 65% of ultra-Orthodox Jews don’t work, government figures show. The non-employment rate for ultra-Orthodox men has tripled since 1970, Ben-David said. ... nt=2435777
Since it also has been traditional for most Haredi men not to work at making a living, but to devote themselves to religious studies, the political parties, often holding the balance of power in coalition governments, also became useful for extracting financial concessions from governments wanting to stay in power.

With a birthrate several times that of other Jewish Israelis, the Haredim have an average of 8.9 children per family and constitute more than 10 per cent of the population of seven million. ... oblem.html
Most ultra-Orthodox Jews lack the skills to work in a modern economy, having studied little or no math and science beyond primary school (their curriculum focuses almost entirely on religious texts such as the Torah and Talmud). As a result, more than 60 percent live below the poverty line, compared with 12 percent among non-Haredi Jews. Most also opt out of military service, which is compulsory for other Israelis. The net effect: as the Haredi community expands, the burden of both taxation and conscription falls on fewer and fewer Israelis.

According to pollsters, Haredim are consistently hawkish on the question of territorial compromise with the Palestinians, citing God’s covenant with Abraham granting Jews the land of Israel. Already the parties that represent them wield significant political power in Israel’s coalition-based system.
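The demographic arithmetic behind these figures compounds quickly. Here is a deliberately crude sketch of how a high-fertility minority's population share grows per generation; the 3.0 fertility rate assumed for the rest of the population is my placeholder, and the model ignores mortality, movement between groups, and generation timing. The starting numbers are rounded from the figures quoted above (~10% of ~7 million).

```python
def project_share(group, rest, group_tfr=8.9, rest_tfr=3.0, generations=2):
    """Crude per-generation projection: each cohort is replaced by its children.

    TFR is children per couple, so children per person is TFR / 2.
    Ignores mortality, attrition between groups, and generation timing.
    """
    for _ in range(generations):
        group *= group_tfr / 2
        rest *= rest_tfr / 2
    return group / (group + rest)

# Rounded starting point from the article: ~10% of ~7 million people.
share = project_share(0.7e6, 6.3e6)
# roughly 0.49: from ~10% to about half the population in two generations,
# under these deliberately crude assumptions
```

However rough the assumptions, the direction is clear: a group growing at 8.9 children per couple mathematically overwhelms one growing at anything near replacement, which is why the taxation and conscription burden falls on a shrinking share of Israelis.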

South Korea:

Essentially the entire country's economic output comes from a few companies, which are directly managed and indirectly owned by fewer than a dozen families. What we would call "corruption" is standard operating procedure there. Even if a whistleblower manages to avoid being disappeared, they will never be able to get a regular job again, because the companies are the economy. Any executives who do get caught red-handed are simply pardoned, and life goes on.
In 2007, an appellate court suspended a three-year prison sentence given to Hyundai Motor Chairman Chung Mong-Koo for embezzlement. The presiding judge said the auto tycoon was too important to serve time.
Samsung’s Mr. Lee was convicted of tax evasion in 2008, fined and handed a suspended prison sentence. He resigned as chairman after his indictment, but returned to the post last year after a nearly two-year absence.
Many convicted executives and officials, including Mr. Lee and Mr. Chung, have received special presidential pardons. The aim is usually to foster reconciliation or pave the way for people with key knowledge, experience and skills to continue to contribute to the country’s development. ... 40551.html

The populace was fractured between various independent and left-wing candidates and Park Geun-hye, the dead dictator's daughter. All of the smaller candidates dropped out of the race to back the civil rights lawyer Moon Jae-in. Park won out over Moon, 51.6% to 48%. While Park did win the largest share of votes in any election in the country's history, I figured the gap would be wider. Park is the living avatar of the status quo. ... al-events/
1. Muddle-along rule

On and off for several decades, knowing analysts have forecast state collapse for Pakistan, the Philippines, Somalia, and other nations. Kyrgyzstan and Uzbekistan have been said to be destined for economic ruin, and North Korea for the ash heap of history. Yet they have gone on—often with the help of the global community, but gone on they have. The lesson is that countries tend to muddle along regardless of the trouble, and not collapse.

2. Precipice rule

A corollary to Rule No. 1. Even the most violence-riven nations tend not to plunge over the precipice, as it seems they might, but to pull back if only at the last moment and not devolve into utter chaos and ruin. Often they need help—last year, Kenya sent troops to Somalia to break up the radical militia al-Shabab, for example. But absent the Precipice Rule, Kenya’s intervention would not have worked: Somalis in fact did not wish to dive into the abyss. So al-Shabab could be uprooted.

3. Conspiracy rule

When you find a simple explanation for an event, the safest bet is to embrace it. To be sure, conspiracies exist—what would war be without them, for instance? But they are much rarer than many suppose. Generally speaking, groups of people do not successfully conceive and execute dastardly schemes; even if they want to, they are typically confounded by the compound physics of too many moving parts and human fallibility. (You can think of this as the Occam’s Razor of geopolitics.)

4. Economic/health/injustice rule

A desire for these three things—economic success, good health and justice—is the big driver in political revolt and revolution. The inflection point is when a critical mass of individuals despairs for the future of its children, and youths feel they can succeed only under a different circumstance. Governments generally do not fall over questions of liberty and political expression, which are not nearly as potent as a collective sense of injustice, helplessness or outrage over the security or health of their children. To battle this rule, a regime will try to change the subject (using the potent factors of Rule No. 14—nationalism, xenophobia, jingoism and fear of instability) and, if that fails, to scare the wits out of its population.

5. Idea rule

The most overrated of the drivers of change. Political ideas and theories, even when they are brilliant, only very rarely gain the critical mass to move events. But occasionally they do. A case in point is the Arab Spring. Originally triggered by Rule No. 4, the Spring has spread and been sustained by the idea of the right to rise up.

6. Caesar rule

When states are muddling along, staying away from the precipice and not at the stage of revolt, as described in Rules No. 4 and 5, the only other way that dictators are typically ousted is defection or assassination. Generally speaking, a key ally or a few will either pull away from a ruler, causing an apparently strong edifice of power to crumble, or kill him outright.

7. Staying in power rule

The ultimate objective of almost every leader in the world. Governments including dictatorships may seem sclerotic, but can become among the most nimble of things when under existential threat. Keep this in mind when you are tempted to say, “He will never change. He has always been that way.”

8. Territory rule

Among the most powerfully visceral forces in politics. A threat to even the slenderest sliver of land can arouse the primal and uncontrolled indignation of a people. When territory is involved, common sense can vanish even among otherwise worldly and balanced leaders and their people, leading to brittle diplomacy and, if Rule No. 2 is not invoked, a drift toward war.

9. The rule of averages

As with most matters in life, events tend toward the average, the local version of the moderate middle. There can be periods of wild, insane extremism. But then people are prone to calm down, do business and seek strong, stable and bright futures for their children.

10. Big personality rule

What would Venezuela have been during the 2000s without Hugo Chavez? Libya during the last quarter of the 20th century without Moamar Gadhafi? Russia for the last dozen years without Vladimir Putin? For that matter, Great Britain in 1939 without Winston Churchill, and Vietnam in the 1950s and 1960s without Ho Chi Minh? In politics, personality matters, and big, idiosyncratic personalities move and dominate events.

No. 10 has symbiosis with the following two rules.

11. True-believer rule

While people and countries tend toward the middle, events can turn on exceptions operating on the extremes. Hitler’s Germany is an example. Today, Khamenei’s Iran, Afghanistan’s Taliban, Kim’s North Korea and Chávez’s Venezuela punch above their weight in influencing the geopolitical landscape.

12. Mountain rule

Like Rule No. 10, this is a direct carryover from the energy indicators. That is, certain countries are so large and their behavior so singular that their actions can create and disrupt economic and geopolitical trends. China, Russia, Saudi Arabia and the United States are among the Mountains. When one or more of them step into the picture, they can and do create news.

There are three corollaries to the Mountain Rule:

The future superpower corollary: China is not yet a military or economic power of the stature of the US, but since most assume it will be, it is more or less already treated and behaves as one. As a Mountain, it can and does shape and shift economic and political trends.

The former colonial/great power corollary: When you formerly were a great power, it is hard to give up the mantle. Such is the lot of countries like France and Great Britain. Though well past their great-power prime—and not Mountains in either case—both from time to time play outsized roles in big events, such as France’s 2011 intervention in Libya. A problem comes, however, when inflated former great-power thinking conflicts with current powers, in which case it is regarded as a nuisance. Such is the case of Russia, a Mountain whose often countervailing policies seem to be Moscow’s strategy for staying in the great-power game (see next corollary).

The perceived great power corollary: India, Iran and Turkey all perceive themselves as great powers (and in the latter two cases actually were a long time ago, and as such also fall under the previous corollary). So they can and do behave in ways that impact events far beyond their shores. India projects its weight around the Indian Ocean and the Subcontinent, Turkey around the Mediterranean and into Central Asia, and Iran around the Persian Gulf and into the Levant. Russia, having lost its great-power status in 1991, interjects its leverage wherever it sees a useful opening.

13. Getting-rich rule

Why do leaders act as they do? Often, look no further than personal enrichment.

14. Local politics rule

Most geopolitics begin at home. Whether deliberate or inadvertent, domestic politics are a crucial contextual determinant of future events. Among key local influences are xenophobia, nationalism and jingoism.
штрафбат вперед
User avatar
General Patton
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Mar 04, 2013 12:08 am

This course starts tomorrow. I've already read the book, so I'm recommending the course. It's free, and once you sign up you will be able to view it forever:
Introduction to Mathematical Thinking
by Keith Devlin
The goal of the course is to help you develop a valuable mental ability – a powerful way of thinking that our ancestors have developed over three thousand years.

Mathematical thinking is not the same as doing mathematics – at least not as mathematics is typically presented in our school system. School math typically focuses on learning procedures to solve highly stereotyped problems. Professional mathematicians think a certain way to solve real problems, problems that can arise from the everyday world, or from science, or from within mathematics itself. The key to success in school math is to learn to think inside-the-box. In contrast, a key feature of mathematical thinking is thinking outside-the-box – a valuable ability in today’s world. This course helps to develop that crucial way of thinking.

The primary audience is first-year students at college or university who are thinking of majoring in mathematics or a mathematically-dependent subject, or high school seniors who have such a college career in mind. They will need mathematical thinking to succeed in their major. Because mathematical thinking is a valuable life skill, however, anyone over the age of 17 could benefit from taking the course.

Similarly, the current order of operations I am using for historical events is: People, Events, Ideology.

Ideology is always colored by the context in which it exists. Communism in the 20th century is colored by a post-Industrial Revolution world where economies of scale favor large institutions that can spend large amounts of money on military build-up. Ideologies cannot be separated from the personalities behind them; coming to understand and even predict events requires knowledge of people.

In a vacuum of abstraction all sorts of spherical cows can be constructed in favor of an ideology; only when you connect it to people can you understand its roots and its future. At the macro scale this is as simple as studying demographics to find hidden trends and details. At the micro scale it requires much more psychology and listening, to pry into the minds of people who do their best to drive trends and events in their favor, be it an election cycle or a market. What can be extremely hard is made simple by moving to the right medium or applying the right order of operations: imagining 5 dimensions is impossible in practical terms, but writing them out mathematically is easy, depending on how the specific area of mathematics defines a dimension.
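That last point is easy to demonstrate: a point in 5 dimensions is impossible to picture but trivial to write down and compute with. A minimal sketch in Python (the points and the distance function here are purely illustrative):

```python
import math

def distance(p, q):
    """Euclidean distance between two points of any dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Two points in 5-dimensional space: impossible to visualize,
# trivial to manipulate once written out as coordinate lists.
p = [1, 0, 0, 0, 0]
q = [0, 0, 0, 0, 1]
print(distance(p, q))  # sqrt(2), about 1.4142
```

The same function works unchanged in 2, 5, or 500 dimensions; the difficulty of "imagining" the space never enters into it.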


Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby JackRiddler » Mon Mar 04, 2013 12:42 am


We meet at the borders of our being, we dream something of each other's reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
Posts: 15263
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Mar 04, 2013 4:14 pm

Our retina is 2D; through some perceptual trickery we turn that raw input into 3D images. One of the interesting future applications of sensors: expanding the human visual field beyond its current limited state. Even though we are limited to perceiving three dimensions, we still process only a tiny fraction of what occurs within them. ... us-system/
The most important function of the brain is figuring out what to ignore: Research suggests that we can process only about one percent of the visual information we take in at any given moment. That's one reason why, as augmented reality (AR) inches ever closer to prime time, researchers at the University of Tokyo tackled an issue that could be distracting and even dangerous: Clutter in the narrow portion of our visual field that is high resolution -- literally, the center of our attention.

Their solution is as straightforward as it is ingenious: display objects that demand attention in the user's peripheral vision as simple icons that can be processed even by the limited visual acuity of our peripheral vision. If a user wants more information, for example to read an email represented in the peripheral vision by an icon, simply concentrating on the object brings up a higher-resolution instance of it with as much attached information as necessary.

Scaling Up v. Scaling Out ... costs.html
The HP ProLiant DL785 G5 starts at $16,999 -- and that's barebones, with nothing inside. Fully configured, as Markus describes, it's kind of a monster:

7U size (a typical server is 2U, and mainstream servers are often 1U)
8 CPU sockets
64 memory sockets
16 drive bays
11 expansion slots
6 power supplies

It's unclear if they bought it pre-configured, or added the disks, CPUs, and memory themselves. The most expensive configuration shown on the HP website is $37,398 and that includes only 4 processors, no drives, and a paltry 32 GB memory. When topped out with ultra-expensive 8 GB memory DIMMs, 8 high end Opterons, 10,000 RPM hard drives, and everything else -- by my estimates, it probably cost closer to $100,000. That might even be a lowball number, considering that the DL785 submitted to the TPC benchmark website (pdf) had a "system cost" of $186,700. And that machine only had 256 GB of RAM. (But, to be fair, that total included another major storage array, and a bunch of software.)

At any rate, let's assume $100,000 is a reasonable ballpark for the monster server Markus purchased. It is the very definition of scaling up -- a seriously big iron single server.

But what if you scaled out, instead -- Hadoop or MapReduce style, across lots and lots of inexpensive servers? After some initial configuration bumps, I've been happy with the inexpensive Lenovo ThinkServer RS110 servers we use. They're no match for that DL785 -- but they aren't exactly chopped liver, either:

Lenovo ThinkServer RS110 barebones $600
8 GB RAM $100
2 x eBay drive brackets $50
2 x 500 GB SATA hard drives, mirrored $100
Intel Xeon X3360 2.83 GHz quad-core CPU $300

Grand total of $1,150 per server. Plus another 10 percent for tax, shipping, and so forth. I replace the bundled CPU and memory that the server ships with, and then resell the salvaged parts on eBay for about $100 -- so let's call the total price per server $1,200.

Now, assuming a fixed spend of $100,000, we could build 83 of those 1U servers. Let's compare what we end up with for our money:

       Scaling Up    Scaling Out
CPUs   32            332
RAM    512 GB        664 GB
Disk   4 TB          40.5 TB

What about the power costs? Electricity and rack space aren't free.

                    Scaling Up    Scaling Out
Peak Watts          1,200 W       16,600 W
Power Cost / Year   $1,577        $21,815
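For what it's worth, both yearly power figures are consistent with a flat electricity rate of about $0.15/kWh (my assumption; the post doesn't state the rate it used):

```python
def yearly_power_cost(peak_watts, dollars_per_kwh=0.15):
    """Annual electricity cost at a constant draw.
    The $0.15/kWh rate is an assumption that reproduces the
    figures quoted above to within a few dollars."""
    kwh_per_year = peak_watts / 1000 * 24 * 365
    return kwh_per_year * dollars_per_kwh

print(round(yearly_power_cost(1_200)))   # 1577  (scaling up)
print(round(yearly_power_cost(16_600)))  # 21812 (scaling out; post says $21,815)
```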
That said, I think it's fair to conclude that scaling out is only frictionless when you use open source software. Otherwise, you're in a bit of a conundrum: scaling up means paying less for licenses and a lot more for hardware, while scaling out means paying less for the hardware, and a whole lot more for licenses.


An interesting parallel from Left 4 Dead 2:

Normal tanks are quite easy to kill with a party of 3-4 players; however, when you spawn 3 tanks with reduced HP and attack, they become much more dangerous than one fully powered tank. It increases the likelihood of dying by about 6 times, based on some old stats I have. It's much harder to kite and coordinate your team against multiple targets at once, which is why it's more effective than just increasing the stats on a single tank. Well-coordinated teams can reduce the likelihood of dying significantly.

Spawning 3-4 tanks with 20-25% of a tank's regular HP seems to be the sweet spot here; if you reduce the health too much, they become too weak to present a challenge.
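A toy model of why this happens (all numbers invented for illustration; this is not the game's actual damage math): treat each simultaneous tank as an independent chance of a team wipe, inflated by a penalty for split attention.

```python
def relative_wipe_risk(p_single, n_threats, attention_penalty):
    """Toy model: each simultaneous threat contributes an independent
    wipe chance, inflated by a penalty for divided attention.
    Returns the risk relative to facing a single full-strength threat."""
    p_each = p_single * attention_penalty
    p_multi = 1 - (1 - p_each) ** n_threats
    return p_multi / p_single

# Assume a 5% wipe chance against one tank and a 2x penalty for
# split attention: three weakened tanks come out roughly 5-6x deadlier.
print(round(relative_wipe_risk(0.05, 3, 2.0), 2))  # 5.42
```

Tightening coordination lowers the attention penalty, which is exactly the "well-coordinated teams" effect described above.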

It takes some time for the people added to a project to become productive. Brooks calls this the "ramp up" time. Software projects are complex engineering endeavors, and new workers on the project must first become educated about the work that has preceded them; this education requires diverting resources already working on the project, temporarily diminishing their productivity while the new workers are not yet contributing meaningfully. Each new worker also needs to integrate with a team composed of multiple engineers who must educate the new worker in their area of expertise in the code base, day by day. In addition to reducing the contribution of experienced workers (because of the need to train), new workers may even have negative contributions – for example, if they introduce bugs that move the project further from completion.
Communication overheads increase as the number of people increases. The number of different communication channels increases rapidly with the number of people.

Everyone working on the same task needs to keep in sync, so as more people are added they spend more time trying to find out what everyone else is doing.
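That "increases rapidly" is quantifiable: with n people there are n(n-1)/2 possible pairwise channels (the intuition behind Brooks's law), so communication overhead grows quadratically while headcount grows only linearly:

```python
def channels(n):
    """Number of pairwise communication channels among n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, channels(n))
# 2 people -> 1 channel, 5 -> 10, 10 -> 45, 50 -> 1225:
# a 5x larger team has 27x the potential chatter.
```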

"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." - Roy Amara ... _headlines
"Any headline which ends in a question mark can be answered by the word no."
The reason why journalists use that style of headline is that they know the story is probably bollocks, and don’t actually have the sources and facts to back it up, but still want to run it.
"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby Wombaticus Rex » Mon Mar 04, 2013 8:26 pm

I am fascinated by how much the Scaling Fallacy material discussed above reminds me of what I think Rene Thom was trying to get across to me when I read his Catastrophe Theory material over and over in MTL. I don't think I fully understood it, but what I did take away bears a visceral resemblance to the strategic insight you're fleshing out between the tank example and the visual of the Icarus blueprint as a failure triggered by, like...scale illiteracy. Context blindness, maybe.
User avatar
Wombaticus Rex
Posts: 10608
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Mar 04, 2013 9:55 pm

Wombaticus Rex wrote:I am fascinated by how much the Scaling Fallacy material discussed above reminds me of what I think Rene Thom was trying to get across to me when I read his Catastrophe Theory material over and over in MTL. I don't think I fully understood it, but what I did take away bears a visceral resemblance to the strategic insight you're fleshing out between the tank example and the visual of the Icarus blueprint as a failure triggered by, like...scale illiteracy. Context blindness, maybe.

I've never heard of that before; it wasn't on the complexity science map:

One of the joys of simulations is that you get a familiar metaphor to work from. It also gives you an excuse to play video games to see how rules can be tweaked slightly to reorder behavior.
Little by little the two points of intersection approach each other. At the moment of tangency they smash head on. After impact, the line no longer crosses the circle at all. It’s as if the intersections annihilated each other, like a particle and anti-particle.

This is the “fold catastrophe,” the most basic scenario in catastrophe theory. It’s important because in its aftermath there are no other intersections in sight. Whatever the system is going to do next, it’s going to be something radically different. It has to leap to a different state.
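The geometry in that passage can be checked numerically. The normal form of the fold is x² = c: for c > 0 the line and curve intersect twice, at c = 0 they touch (tangency), and for c < 0 the intersections have annihilated and the system must jump. A minimal sketch:

```python
import math

def intersections(c):
    """Real solutions of x^2 = c: the fold catastrophe's normal form."""
    if c > 0:
        r = math.sqrt(c)
        return [-r, r]       # two intersection points
    if c == 0:
        return [0.0]         # the moment of tangency
    return []                # the intersections have annihilated

for c in (1.0, 0.25, 0.0, -0.25):
    print(c, intersections(c))
```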

A leap like this occurs in our own bodies — specifically, in how long we tend to sleep after being awake for many hours. Imagine that you’ve just stayed up very late. Maybe you’ve even pulled an all-nighter. When you finally get to bed, will you sleep more than usual, the same, or less? Assume that you can sleep undisturbed until you wake up naturally.

It turns out that the longer you stay up, the less sleep you’ll get afterward. If you haven’t experienced this, it sounds unbelievable, but it’s been confirmed in several experiments and field studies (see the endnotes for references). Every extra hour awake will cost you about 20 or 30 minutes of sleep — assuming you go to bed before noon the next day. The leap occurs if you stay up even longer. Suppose you don’t get to bed till around 3 the following afternoon, having been awake for something like 32 straight hours. How long do you think you’ll sleep now? A 1996 study found that half the subjects woke up after only a lousy little three-hour nap, while the other half corked off for 11 hours!

Let me repeat this since it’s so strange. If you’ve been awake long enough and you fall asleep when your body thinks it’s siesta time, the amount of sleep you’ll get can be either very short or very long. At that time of day, sleep duration jumps abruptly.

To help us understand these results, consider a simple pictorial model. The sloping line represents the rise in your restedness as you accumulate sleep. When restedness reaches a threshold level shown by the wavy curve, it becomes hard to sleep any more and you wake up spontaneously.
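The pictorial model is simple enough to code. Below is a toy version with invented parameters (a linear rise in restedness and a sinusoidal threshold; none of the constants come from the actual studies): you wake the first moment restedness reaches the wavy threshold, and bedtimes an hour apart straddle the fold, jumping from a short nap to a very long sleep.

```python
import math

def sleep_duration(bedtime, rise_per_hour=0.10, step=0.01):
    """Toy sleep model: restedness climbs linearly from 0 at sleep
    onset; you wake when it crosses a circadian threshold that waves
    over the 24-hour day (minimum around 21:00, maximum around 9:00).
    All parameters are invented for illustration."""
    t = 0.0
    while t < 24:
        restedness = rise_per_hour * t
        clock = (bedtime + t) % 24
        threshold = 1.0 + 0.5 * math.sin(2 * math.pi * (clock - 3) / 24)
        if restedness >= threshold:
            return round(t, 1)
        t += step
    return 24.0

# One hour of bedtime separates a short nap from a marathon sleep:
print(sleep_duration(17.0))  # crosses the threshold in its evening dip
print(sleep_duration(18.0))  # misses the dip, sleeps into the next day
```

The exact durations depend entirely on the invented constants; the point is the discontinuity, not the numbers.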

It certainly is a form of discrete math.

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Mar 04, 2013 10:40 pm

Also, there's no limit to how many courses you can sign up for on Coursera. I'm already taking a dozen courses at once; I just log in and feast. The most useful so far has probably been Mung Chiang's Networked Life - 20 Questions & Answers. Pay attention to the backgrounds of the teachers; I've heard of some horribly organized or factually inaccurate courses that got promoted on there.

In a real college taking this many courses would be suicide; you can only get away with it if you focus mainly on theoretical knowledge and skip some of the redundant coursework (with the exception of wonderful courses like Chiang's).

Scott Young pulled off a 4-year MIT comp-sci degree in about 12 months by doing it all online. He had written about learning methods previously, so he had a strong background for it.

His book:
As an update: since the first four classes, I've been proceeding through 3-4 classes in parallel. Although this involves reviewing over a much shorter time span than is typical (1 month instead of 1 semester), it increases the benefits of spaced repetition.

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Sun Mar 10, 2013 11:43 pm

So far I've been sailing through 9 STEM courses; the others start soon. Scott Young's 4-to-1 ratio is holding up pretty well: it takes about 1/4 of the listed time to complete the exams and coursework. A lot of it I'm already familiar with, but I've found gaps here and there that need to be shored up. Good for leaving no stone unturned and refreshing everything. The barrier to entry is usually just basic high-school math, except for Chiang's Networked Life course, which requires linear algebra and a few other things. Chiang's course is the exception and is qualitatively harder because it poses unsolved questions in electronics engineering as homework. I haven't joined any teams to do homework yet, so I have no idea what the group cooperative experience is like.

According to the professors themselves, the actual completion rate is only 1-3%, maybe 10% in the best cases. Many of them mention this in lectures, along with steps they're taking to raise the completion rate. The people who do finish tend to get very high scores, near 100%.

So obviously the platform needs a lot of work until it can replace traditional colleges.

I did get an interesting idea: applying the scaling fallacy to voting behaviors and other personal preferences to model how "rational" actors aggregate into irrational systems.

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Mar 11, 2013 8:01 pm

Here's an example comparing two Natural Language Processing courses, one from Columbia University and the other from Stanford:

There is an extreme difference in their use of depth of processing. Columbia's is nearly all black-and-white slides; the Stanford course at least has the sense to use more than two colors at once, and shows the face of the instructor as he explains the material. The Stanford course, from what I have seen so far, is more practically focused because it can cover state-of-the-art material without getting bogged down in plug-and-chug mathematical formulas. Both require linear algebra and coding; it's just that the Columbia course abstracts away even the simplest details, so you have to do extra work to un-abstract them. The detail isn't hard; it's that the material was made too general and didn't engage depth of processing.

10% of the graduating class at Stanford creates a start-up immediately after graduation. Finding exact statistics on Columbia's entrepreneurship has been very difficult, as most material talks about being employed by a large company or working up the corporate ladder as an MBA at a well-known organization like the Red Cross.

Stanford is ranked #2 in CompSci/Technology/Engineering by the Shanghai ranking and Columbia is ranked #49.

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Mon Mar 11, 2013 9:39 pm

Taking this a little further, let's compare our professors on Google Scholar and then look at individual works:

When we look at books published, we can see Jurafsky has co-published 3 textbooks, Manning has co-published 2 and authored 2 on his own.

Collins hasn't published any books, but you can see a long history of publications about algorithms; however, his most-cited papers are at least 8 years old. Both Stanford professors have much more recently cited work and focus on the application of algorithms.

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Thu Apr 04, 2013 11:25 pm


