Structured Thinking : Analysis, Exploration, Exploitation

Moderators: Elvis, DrVolin, Jeff

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Wed May 01, 2013 2:20 pm
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

Something to look out for: old physicists writing on things outside of their field, like AI research, economics, electrical engineering, etc.

Somehow they keep getting through peer review and get a lot of press in pop-sci mags. For some reason older physicists treat everything as a natural system and never bother to learn domain-specific knowledge. ... g/491.html
Regular readers who care about such things — I think there are about three of you — will recall that I have long had a thing about just how unsound many of the claims for the presence of power law distributions in real data are, especially those made by theoretical physicists, who, with some honorable exceptions, learn nothing about data analysis.
1. Lots of distributions give you straight-ish lines on a log-log plot. True, a Gaussian or a Poisson won't, but lots of other things will. Don't even begin to talk to me about log-log plots which you claim are "piecewise linear".

2. Abusing linear regression makes the baby Gauss cry. Fitting a line to your log-log plot by least squares is a bad idea. It generally doesn't even give you a probability distribution, and even if your data do follow a power-law distribution, it gives you a bad estimate of the parameters. You cannot use the error estimates your regression software gives you, because those formulas incorporate assumptions which directly contradict the idea that you are seeing samples from a power law. And no, you cannot claim that because the line "explains" (really, describes) a lot of the variance that you must have a power law, because you can get a very high R^2 from other distributions (that test has no "power"). And this is without getting into the additional errors caused by trying to fit a line to binned histograms.

It's true that fitting lines on log-log graphs is what Pareto did back in the day when he started this whole power-law business, but "the day" was the 1890s. There's a time and a place for being old school; this isn't it.

3. Use maximum likelihood to estimate the scaling exponent. It's fast! The formula is easy! Best of all, it works! The method of maximum likelihood was invented in 1922 [parts 1 and 2], by someone who studied statistical mechanics, no less. The maximum likelihood estimators for the discrete (Zipf/zeta) and continuous (Pareto) power laws were worked out in 1952 and 1957 (respectively). They converge on the correct value of the scaling exponent with probability 1, and they do so efficiently. You can even work out their sampling distribution (it's an inverse gamma) and so get exact confidence intervals. Use the MLEs!
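To make the recommendation concrete, here is a minimal sketch (not from the quoted text) of the continuous-case MLE, assuming a known lower cutoff x_min: the closed-form estimator is alpha_hat = 1 + n / sum(ln(x_i / x_min)).

```python
import numpy as np

def pareto_mle_alpha(x, x_min):
    """MLE of the scaling exponent for a continuous power law
    p(x) = (alpha - 1) * x_min**(alpha - 1) * x**(-alpha), x >= x_min."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]                        # only the tail follows the power law
    n = len(x)
    return 1.0 + n / np.sum(np.log(x / x_min))

# Samples from a Pareto with true exponent alpha = 2.5 (x_min = 1).
# NumPy's pareto(a) draws the Lomax form; adding 1 gives the classical
# Pareto with density a * x**-(a + 1), i.e. alpha = a + 1.
rng = np.random.default_rng(0)
x = 1.0 + rng.pareto(1.5, size=200_000)
alpha_hat = pareto_mle_alpha(x, 1.0)         # close to 2.5
```

Contrast this with least squares on the log-log histogram, which, as the quote says, has no valid error bars and no power against lognormal-looking alternatives.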

штрафбат вперед
User avatar
General Patton
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Wed May 29, 2013 12:02 pm

Some superstring theories have more than one effective low-energy limit corresponding to classical spacetimes with different dimensionalities. We argue that all but the (3+1)-dimensional one might correspond to ‘dead worlds’, devoid of observers, in which case all such ensemble theories would actually predict that we should find ourselves inhabiting a (3+1)-dimensional spacetime. With more or less than one time dimension, the partial differential equations of nature would lack the hyperbolicity property that enables observers to make predictions. In a space with more than three dimensions, there can be no traditional atoms and perhaps no stable structures. A space with less than three dimensions allows no gravitational force and may be too simple and barren to contain observers. ... _spacetime
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested[13] that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena." Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow says that it "...gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa." (Barrow 2002: 204). This is because the law of gravitation (or any other inverse-square law) follows from the concept of flux and the proportional relationship of flux density and the strength of field. If N = 3, then 3-dimensional solid objects have surface areas proportional to the square of their size in any selected spatial dimension. In particular, a sphere of radius r has area of 4πr². More generally, in a space of N dimensions, the strength of the gravitational attraction between two bodies separated by a distance of r would be inversely proportional to r^(N−1).
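The flux argument above is easy to check numerically: a conserved flux spread over the surface of a sphere in N dimensions gives a field falling off as r^−(N−1). A small sketch (the function names are mine, not from the quoted text; the general surface-area formula uses the gamma function):

```python
import math

def sphere_surface_area(N, r):
    # Surface area of the boundary of an N-dimensional ball of radius r:
    # A_N(r) = 2 * pi**(N/2) / Gamma(N/2) * r**(N-1)
    return 2.0 * math.pi ** (N / 2) / math.gamma(N / 2) * r ** (N - 1)

def field_strength(N, r, flux=1.0):
    # Conserved flux spread evenly over the sphere => strength ~ r**-(N-1)
    return flux / sphere_surface_area(N, r)

# N = 3 recovers the familiar results:
area = sphere_surface_area(3, 1.0)                        # 4 * pi
ratio = field_strength(3, 2.0) / field_strength(3, 1.0)   # 1/4: inverse square
```

In N = 4 the same calculation gives a 1/r³ force, which is the setup for Ehrenfest's instability result quoted below.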

In 1920, Paul Ehrenfest showed that if we fix T = 1 and let N > 3, the orbit of a planet about its sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy.[14] Ehrenfest also showed that if N is even, then the different parts of a wave impulse will travel at different speeds. If N > 3 and odd, then wave impulses become distorted. Only when N = 3 or 1 are both problems avoided. In 1922, Hermann Weyl showed that Maxwell's theory of electromagnetism works only when N = 3 and T = 1, writing that this fact "...not only leads to a deeper understanding of Maxwell's theory, but also of the fact that the world is four dimensional, which has hitherto always been accepted as merely 'accidental,' become intelligible through it."[15] Finally, Tangherlini[16] showed in 1963 that when N > 3, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.
Hence anthropic and other arguments rule out all cases except N = 3 and T = 1 (or N = 1 and T = 3 in different conventions) — which happens to describe the world about us. Curiously, the cases N = 3 or 4 have the richest and most difficult geometry and topology. There are, for example, geometric statements whose truth or falsity is known for all N except one or both of 3 and 4. N = 3 was the last case of the Poincaré conjecture to be proved.
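Ehrenfest's orbit instability can be seen in a few lines of simulation. A hedged sketch (my own toy integrator, not from the quoted sources): integrate a planar orbit under the N-dimensional analogue of gravity, F ∝ 1/r^(N−1), starting just inside a circular orbit. For N = 3 the orbit stays a bounded, slightly eccentric ellipse; for N = 5 the same tiny perturbation sends the planet spiralling into the sun, because the centrifugal barrier L²/r² cannot hold off a force growing faster than 1/r³.

```python
import math

def simulate_orbit(N, steps=400_000, dt=5e-5):
    """Planar orbit under acceleration -r_hat / r**(N-1) (units with GM = 1).
    Starts at r = 0.99 with the tangential speed of a circular orbit at r = 1,
    i.e. a small inward perturbation. Returns (r_min, r_max, plunged)."""
    x, y = 0.99, 0.0
    vx, vy = 0.0, 1.0
    r_min = r_max = 0.99
    for _ in range(steps):
        r = math.hypot(x, y)
        r_min, r_max = min(r_min, r), max(r_max, r)
        if r < 0.3:
            return r_min, r_max, True        # plunged toward the center
        acc = -1.0 / r ** (N - 1)            # radial acceleration magnitude
        vx += acc * (x / r) * dt             # semi-implicit Euler (symplectic)
        vy += acc * (y / r) * dt
        x += vx * dt
        y += vy * dt
    return r_min, r_max, False

# N = 3: bounded ellipse near r = 1.  N = 5: collapse within a few orbits.
```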

String theory hypothesizes that matter and energy are composed of tiny vibrating strings of various types, most of which are embedded in dimensions that exist only on a scale no larger than the Planck length. Hence N = 3 and T = 1 do not characterize string theory, which embeds vibrating strings in coordinate grids having 10, or even 26, dimensions.

The Causal dynamical triangulation (CDT) theory is a background independent theory which derives the observed 3+1 spacetime from a minimal set of assumptions, and needs no adjusting factors. It does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. It shows spacetime to be 2-d near the Planck scale, and reveals a fractal structure on slices of constant time, but spacetime becomes 3+1-d at scales significantly larger than Planck. So, CDT may become the first theory which doesn't merely postulate but actually explains the observed number of spacetime dimensions.[21]

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Wed May 29, 2013 12:13 pm

From David Owens' Leading Strategic Innovation & Creative People Must Be Stopped:

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Wed May 29, 2013 12:30 pm

Combining Complexity and Simplicity: ... omplexity/

Imaginary Numbers: ... y-numbers/

& Founder's Dilemma HBR: ... ilemma.pdf


A programming system has two parts. The programming "environment" is the part that's installed on the computer. The programming "language" is the part that's installed in the programmer's head.

This essay presents a set of design principles for an environment and language suitable for learning.

The environment should allow the learner to:

read the vocabulary -- what do these words mean?
follow the flow -- what happens when?
see the state -- what is the computer thinking?
create by reacting -- start somewhere, then sculpt
create by abstracting -- start concrete, then generalize
The language should provide:

identity and metaphor -- how can I relate the computer's world to my own?
decomposition -- how do I break down my thoughts into mind-sized pieces?
recomposition -- how do I glue pieces together?
readability -- what do these words mean?
Learning cooking is not about guessing the functionality of your kitchen appliances. It's about understanding how ingredients can be combined.

Likewise, guessing the third argument of the "ellipse" function isn't "learning programming". It's simply a barrier to learning. In a modern environment, memorizing the minutiae of an API should be as relevant as memorizing times tables.

The environment is responsible for making meaning transparent. The environment must enable the reader to effortlessly read the program, to decode the code, so she can concentrate on genuine programming concepts -- how the algorithmic "ingredients" combine.


Modularity is the human mind's lever against complexity. Breaking down a complex thing into understandable chunks is essential for understanding, perhaps the essence of understanding.

A programming language must encourage the programmer to decompose -- to approach a complex problem by breaking it into simpler problems. Papert refers to this as breaking down a program into "mind-size bites".
A programming language must encourage recomposition -- grabbing parts of other programs, assembling them together, modifying them, building on top of them. This gives creators the initial material they need to create by reacting, instead of facing every new idea with a blank page. It also allows creators to learn from each other, instead of deriving techniques and style in a vacuum. ...!/KillMath

If I had to guess why "math reform" is misinterpreted as "math education reform", I would speculate that school is the only contact that most people have had with math. Like school-physics or school-chemistry, math is seen as a subject that is taught, not a tool that is used. People don't actually use math-beyond-arithmetic in their lives, just like they don't use the inverse-square law or the periodic table.

Which is the premise of this project, of course -- people don't use math. But everyone seems to believe, if only math were taught better, they would use it! And my position (and the entire point of the project) is: No. Teach the current mathematical notation and methods any way you want -- they will still be unusable. They are unusable in the same way that any bad user interface is unusable -- they don't show the user what he needs to see, they don't match how the user wants to think, they don't show the user what actions he can take.

They are unusable in the same way that the UNIX command line is unusable for the vast majority of people. There have been many proposals for how the general public can make more powerful use of computers, but nobody is suggesting we should teach everyone to use the command line. The good proposals are the opposite of that -- design better interfaces, more accessible applications, higher-level abstractions. Represent things visually and tangibly.

And so it should be with math. Mathematics, as currently practiced, is a command line. We need a better interface.

Oliver Steele: email

Anything that remains abstract (in the sense of not concrete) is hard to think about... I think that mathematicians are those who succeed in figuring out how to think concretely about things that are abstract, so that they aren't abstract anymore. And I believe that mathematical thinking encompasses the skill of learning to think of an abstract thing concretely, often using multiple representations – this is part of how to think about more things as "things". So rather than avoiding abstraction, I think it's important to absorb it, and concretize the abstract... One way to concretize something abstract might be to show an instance of it alongside something that is already concrete.


Alan Kay: Doing With Images Makes Symbols
Jacques Hadamard, the famous French mathematician, in the late stages of his life, decided to poll his 99 buddies, who made up together the 100 great mathematicians and physicists on the earth, and he asked them, "How do you do your thing?" They were all personal friends of his, so they wrote back depositions. Only a few, out of the hundred, claimed to use mathematical symbology at all. Quite a surprise. All of them said they did it mostly in imagery or figurative terms. An amazing 30% or so, including Einstein, were down here in the mudpies [doing]. Einstein's deposition said, "I have sensations of a kinesthetic or muscular type." Einstein could feel the abstract spaces he was dealing with, in the muscles of his arms and his fingers...

The sad part of [the doing -> images -> symbols] diagram is that every child in the United States is taught math and physics through this [symbolic] channel. The channel that almost no adult creative mathematician or physicist uses to do it... They use this channel to communicate, but not to do their thing. Much of our education is founded on those principles, that just because we can talk about something, there is a naive belief that we can teach through talking and listening.

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Sun Jun 09, 2013 5:50 pm

One interesting idea, paraphrasing the above, is that as online education gets more refined it will get better at teaching students and therefore reward hard work more than heritable IQ. Layers of abstraction can be peeled away, though this will likely anger much of academia, to create more broadly usable tools. Current pass rates hover at around 4-17%, with some exceptions getting closer to 25% (most online education users are dabbling to see what is in different courses). The vast majority of users simply don't follow through with the material or watch all of the lecture videos in a given course. Either way, open or semi-open access to the material is a boon to the selection of scientists who follow through with their education while largely ignoring, or acting in spite of, the current academic establishment.

The other benefit of being able to take a whole range of courses in a time-efficient manner, one not brought up as widely, is cross-disciplinary pollination. Richard Hamming, Terry Tao and a whole host of other people have already stressed the importance of being adept in multiple fields: ... nt-places/
It is a very good idea to do your graduate study at a different institution from your undergraduate study, and to take a postdoctoral position at a different place from where you did your graduate study. Even the best mathematics departments do not have strengths in every field, so being at several mathematics departments will broaden your education and expose you to a variety of mathematical cultures, including interesting tools and parts of mathematics outside of your existing fields of expertise. Furthermore, you will be able to interact over time with a greater number of mathematicians in your area if you study and work at different places, than if you stay at a single institution; given that a significant portion of one’s career advancement in a field is based in part on the recognition you and your work receive from your peers in that field, this can thus be quite beneficial to your future career in a mathematical area.

Furthermore, the act of moving will help you make the (substantial) psychological transition from an undergraduate student to a graduate student (in which you have to go beyond rigour and proofs), or from a graduate student to a postdoctoral researcher (in which one has to take the initiative rather than rely purely on your advisor).
I begin with the choice of problem. Most scientists spend almost all of their time working on problems that even they admit are neither great nor likely to lead to great work; hence, almost surely, they will not do important work. Note that importance of the results of a solution does not make the problem important. In all the 30 years I spent at Bell Telephone Laboratories (before it was broken up) no one to my knowledge worked on time travel, teleportation, or anti-gravity. Why? Because they had no attack on the problem. Thus an important aspect of any problem is that you have a good attack, a good starting place, some reasonable idea of how to begin.

To illustrate, consider my experience at BTL. For the first few years I ate lunch with the mathematicians. I soon found that they were more interested in fun and games than in serious work, so I shifted to eating with the physics table. There I stayed for a number of years until the Nobel Prize, promotions, and offers from other companies, removed most of the interesting people. So I shifted to the corresponding chemistry table where I had a friend.

At first I asked what were the important problems in chemistry, then what important problems they were working on, or problems that might lead to important results. One day I asked, "if what they were working on was not important, and was not likely to lead to important things, then why were they working on them?" After that I had to eat with the engineers!

About four months later, my friend stopped me in the hall and remarked that my question had bothered him. He had spent the summer thinking about the important problems in his area, and while he had not changed his research he thought it was well worth the effort. I thanked him and kept walking. A few weeks later I noticed that he was made head of the department. Many years later he became a member of the National Academy of Engineering. The one person who could hear the question went on to do important things and all the others -- so far as I know -- did not do anything worth public attention.

On hard work and online education (note that Terry places much more emphasis on methodology and hard work than gwern does): ... 0education
Hardly any change ever preserves all relative positions or ratios - someone benefits disproportionately, someone benefits only a little.
So what differentials can we expect from online education? Hoary articles from the 90s about the ‘digital divide’ might make one predict that it will benefit middle and upper-class whites; but on the other hand, proponents love to talk about favored minorities (eg. a foreign black female - that is, a girl in an African village) who can now access online education through cheap cellphones, so one might predict that online education will instead level the playing field. No longer will there be a big gap between receiving essentially no education and receiving a real education, a gap that perpetuates cycles of poverty. As Internet access becomes more common than access to quality schools, quality schooling delivered through the Internet will lead to an equalizing effect (the elites will be no better off than before, and the non-elites now have the chance to obtain a prerequisite to becoming an elite).

Success factors

It may help to ask what causes success in education and see how online education affects it. To a first approximation, ignoring environment, one earns educational success through:
IQ obviously predicts a huge chunk of educational success (leading to the ironic accusation that IQ tests are only academic questions) since the smarter one is, the easier learning anything is, much less one’s schoolwork.

Conscientiousness (a personality trait in the Big Five; think hard work, grit, effort)
If one is not smart enough that one can simply inhale lessons and pass tests, one still has the option of working hard: doing extra practice problems, asking for help, etc. Success will not come easy, but it will still come. These 2 factors together will correlate somewhere like 0.7 with educational success: someone who is smart and hard-working will go to the top, and someone who is stupid and lazy will not.

The rest of the correlation is made up of socioeconomic status, culture (eg. East Asian?) and random other things: random life events or hard-to-measure environmental factors like an extra-inspiring teacher, etc.
IQ seems like it could go any way:
Any effects could roughly cancel out, perhaps in some sort of compensating mechanism where students only aim at particular levels of mastery or performance and better or worse methods only change how much time they need to invest before they go off to play video games.

It could increase the need for IQ, because now all the extraneous time-wasting gunk like sharpening pencils or doing roll-call can be cleared away by the technical solutions, leaving more time for pure learning. By eliminating all the environmental hindrances and variation, the only variation left will come from the student’s innate intellectual abilities: IQ. Students will race through courses until they hit their natural limits; even Sal Khan’s videos can’t make a dim bulb calculate solutions to Schrodinger’s equation.

It has been noted in the psychometric literature that successful attempts to eliminate socio-economic penalties and provide quality environments for all children would necessarily increase the apparent contribution of heredity: if every child is in an environment that lets them develop and flourish to their fullest extent, then any remaining differences in their development will be due to hereditary factors! If variations in IQ are the joint product of variations in heredity and environment, then eliminating all variation in environment, setting environment to 0, means the remaining variation will be just the variation in heredity.
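The variance argument can be made concrete with a toy simulation (illustrative only, assuming a simple additive genes-plus-environment model of my own devising): shrinking environmental variance mechanically raises the share of trait variance attributable to heredity, even though nothing about the genes changed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genes = rng.normal(0.0, 1.0, n)            # hereditary component, variance 1
env_varied = rng.normal(0.0, 1.0, n)       # unequal environments, variance 1
env_equalized = rng.normal(0.0, 0.1, n)    # near-uniform quality environment

trait_before = genes + env_varied
trait_after = genes + env_equalized

# Share of trait variance attributable to genes (components are independent,
# so variances add, and this ratio approximates "heritability"):
h2_before = np.var(genes) / np.var(trait_before)   # roughly 0.5
h2_after = np.var(genes) / np.var(trait_after)     # roughly 0.99
```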

It could reduce the need for IQ, since online education will lead to a marketplace of lessons where only the clearest, most insightful, easily understood lessons survive. In ordinary classrooms staffed by ordinary teachers, extemporaneous lectures or explanations are necessarily more opaque and lower-quality compared to a lecture that the world-class presenter has spent months or years honing.
But it is a utopian thought that perhaps everyone will be successful at education; so the question becomes, what trait or environmental factor would then become the best predictor of attainment? If you reduce the need for brains, then perhaps you still need motivation and appetite for work, which in conjunction with the previous point about joint products leads us to the next observations…

Conscientiousness is the joker. There is one clear possible change: online education will increase demand for Conscientiousness compared to offline education.
This has been suggested on more than one occasion. It tallies with my personal experience with online courses and classes with online assignment components, like computer science classes (where class attendance may be optional and programming projects or homework are submitted remotely). I had a good deal of trouble just sitting down to do the course or assignment, even though it was not necessarily that difficult. The distractions on my laptop beckoned: I would go use crufty old Solaris boxes in the computer labs just to avoid the distractions and get something done. Other experiences were more dramatic: one CS exam was done on computers, with a built-in test suite you could run to get your exact grade, so one could spend hours working on it until one had a perfect 100 (which wasn't terribly hard), which of course I did - so I was shocked when the teacher showed us the grade distribution and it looked like a normal CS exam distribution, with plenty of <100 scores and outright failures!
Miscellaneous is too varied and heterogeneous to be predictable, so we won’t discuss it further.

Think of how this will penalize bright creative nerds - potential future great scientists uninterested in forcing themselves to do mandated drudge-work. We have all heard stories of geniuses like Einstein or Darwin or Jung who despised lower or higher education, or did their best to ignore it while educating themselves - Simonton's 1994 Greatness: Who Makes History and Why estimates that this is not a few anecdotes but 60% of his sample. (Conscientiousness is necessary for scientific greatness, but not that much.) ... school.pdf
The historical psychologist Simonton (1994), in his study on geniuses of the western civilisation, concluded that 60% of the great men of the 20th century simply hated their school years.

Einstein's grades throughout school were generally high, particularly in math-related subjects; however, he felt the experience damaged his intrinsic motivation. Unfortunately, the statistical studies can't directly measure whether those scientists were driven by intrinsic or extrinsic motivation, but the evidence suggests the former is much more important than the latter for long-term success (it's somewhat loaded, as pursuing a line of inquiry in spite of institutional interference can be said to be caused by both intellect (g) and intrinsic motivation). ... education/
Secondly, I feel that it is important to enjoy one’s work; this is what sustains and drives a person throughout the duration of his or her career, and holds burnout at bay. It would be a tragedy if a well-meaning parent, by pushing too hard (or too little) for the development of their child’s gifts in a subject, ended up accidentally extinguishing the child’s love for that subject. The pace of the child’s education should be driven more by the eagerness of the child than the eagerness of the parent.

Thirdly, one should praise one’s children for their efforts and achievements (which they can control), and not for their innate talents (which they cannot). This article by Po Bronson describes this point excellently. See also the Scientific American article “The secret to raising smart kids” for a similar viewpoint. ... smart-kids
Our society worships talent, and many people assume that possessing superior intelligence or ability—along with confidence in that ability—is a recipe for success. In fact, however, more than 30 years of scientific investigation suggests that an overemphasis on intellect or talent leaves people vulnerable to failure, fearful of challenges and unwilling to remedy their shortcomings.
Numerous studies over the past thirty years have shown that when people of any age and any ability level are faced with mathematical challenges that arise naturally in a real-world context that has meaning for them, and where the outcome directly matters to them, they rapidly achieve a high level of competence. How high? Typically 98 percent, that's how high. I describe some of those studies in my book The Math Gene (Basic Books, 2000). I also provide an explanation of why those same people, when presented with the very same mathematical challenges in a traditional paper-and-pencil classroom fashion, perform at a lowly 37 percent level.

The evidence is clear. It's not that people cannot think mathematically. It's that they have enormous trouble doing it in a de-contextualized, abstract setting.

So why the continued focus on skills? Because many people, even those in positions of power and influence, not only are totally unaware of the findings I just mentioned, they don't even understand what mathematics is and how it works. All they see are the skills, and they think, wrongly, that is what mathematics is about. Given that for most people, their last close encounter with mathematics was a skills-based school math class, it is not hard to see how this misconception arises. But to confuse mathematics with mastery of skills is the same as thinking architecture is about bricklaying, or confusing music with mastering the musical scale.

For more see Cross-Cultural Analysis of Students with Exceptional Talent in Mathematical Problem Solving:

Realistically, this applies to undergrad and some grad-level material. Once you get into higher-level material, teamwork becomes a much more important factor, and without a solid social network with which to discuss ideas and papers informally, online students will be at a much greater disadvantage compared to in-person learning groups. The PhD isn't going to be replaced anytime soon.

Re: Structured Thinking : Analysis, Exploration, Exploitation

Postby General Patton » Tue Jun 11, 2013 12:13 pm

Intuition, Mathematics and Theory:

I'll post all of this first piece as it requires registration to view: ... ers/873950
You can answer many seemingly difficult questions quickly. But you are not very impressed by what can look like magic, because you know the trick. The trick is that your brain can quickly decide if question is answerable by one of a few powerful general purpose "machines" (e.g., continuity arguments, the correspondences between geometric and algebraic objects, linear algebra, ways to reduce the infinite to the finite through various forms of compactness) combined with specific facts you have learned about your area. The number of fundamental ideas and techniques that people use to solve problems is, perhaps surprisingly, pretty small -- see for a partial list, maintained by Timothy Gowers.

You are often confident that something is true long before you have an airtight proof for it (this happens especially often in geometry). The main reason is that you have a large catalogue of connections between concepts, and you can quickly intuit that if X were to be false, that would create tensions with other things you know to be true, so you are inclined to believe X is probably true to maintain the harmony of the conceptual space. It's not so much that you can imagine the situation perfectly, but you can quickly imagine many other things that are logically connected to it.

You are comfortable with feeling like you have no deep understanding of the problem you are studying. Indeed, when you do have a deep understanding, you have solved the problem and it is time to do something else. This makes the total time you spend in life reveling in your mastery of something quite brief. One of the main skills of research scientists of any type is knowing how to work comfortably and productively in a state of confusion. More on this in the next few bullets.

Your intuitive thinking about a problem is productive and usefully structured, wasting little time on being aimlessly puzzled. For example, when answering a question about a high-dimensional space (e.g., whether a certain kind of rotation of a five-dimensional object has a "fixed point" which does not move during the rotation), you do not spend much time straining to visualize those things that do not have obvious analogues in two and three dimensions. (Violating this principle is a huge source of frustration for beginning maths students who don't know that they shouldn't be straining to visualize things for which they don't seem to have the visualizing machinery.) Instead...

When trying to understand a new thing, you automatically focus on very simple examples that are easy to think about, and then you leverage intuition about the examples into more impressive insights. For example, you might imagine two- and three-dimensional rotations that are analogous to the one you really care about, and think about whether they clearly do or don't have the desired property. Then you think about what was important to the examples and try to distill those ideas into symbols. Often, you see that the key idea in the symbolic manipulations doesn't depend on anything about two or three dimensions, and you know how to answer your hard question.
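The rotation question above is a case where the symbols settle the matter: in an odd number of dimensions, every rotation matrix has +1 as an eigenvalue (complex eigenvalues come in conjugate pairs, so an odd-dimensional rotation is forced to keep a real axis fixed). A minimal numerical sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(n):
    """Draw a random n-by-n rotation (orthogonal, det = +1) via QR."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q * np.sign(np.diag(r))       # fix column signs
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]            # flip one column to force det = +1
    return q

# In 5 dimensions the eigenvalues are two conjugate pairs e^{±iθ}
# plus one real eigenvalue, which must be +1: a fixed axis exists.
R = random_rotation(5)
print(min(abs(np.linalg.eigvals(R) - 1)))   # ≈ 0
```

The familiar low-dimensional fact ("every 3D rotation has an axis") survives the symbolic translation, exactly as described above.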

As you get more mathematically advanced, the examples you consider easy are actually complex insights built up from many easier examples; the "simple case" you think about now took you two years to become comfortable with. But at any given stage, you do not strain to obtain a magical illumination about something intractable; you work to reduce it to the things that feel friendly.

To me, the biggest misconception that non-mathematicians have about how mathematicians work is that there is some mysterious mental faculty that is used to crack a research problem all at once. It's true that sometimes you can solve a problem by pattern-matching, where you see the standard tool that will work; the first bullet above is about that phenomenon. This is nice, but not fundamentally more impressive than other confluences of memory and intuition that occur in normal life, as when you remember a trick to use for hanging a picture frame or notice that you once saw a painting of the street you're now looking at.

In any case, by the time a problem gets to be a research problem, it's almost guaranteed that simple pattern matching won't finish it. So in one's professional work, the process is piecemeal: you think a few moves ahead, trying out possible attacks from your arsenal on simple examples relating to the problem, trying to establish partial results, or looking to make analogies with other ideas you understand. This is the same way that you solve difficult problems in your first real maths courses in university and in competitions. What happens as you get more advanced is simply that the arsenal grows larger, the thinking gets somewhat faster due to practice, and you have more examples to try. Sometimes, during this process, a sudden insight comes, but it would not be possible without the painstaking groundwork [ ... -do-maths/].

Indeed, most of the bullet points here summarize feelings familiar to many serious students of mathematics who are in the middle of their undergraduate careers; as you learn more mathematics, these experiences apply to "bigger" things but have the same fundamental flavor.

You go up in abstraction, "higher and higher". The main object of study yesterday becomes just an example or a tiny part of what you are considering today. For example, in calculus classes you think about functions or curves. In functional analysis or algebraic geometry, you think of spaces whose points are functions or curves -- that is, you "zoom out" so that every function is just a point in a space, surrounded by many other "nearby" functions. Using this kind of zooming out technique, you can say very complex things in short sentences -- things that, if unpacked and said at the zoomed-in level, would take up pages. Abstracting and compressing in this way makes it possible to consider extremely complicated issues with one's (very) limited memory and processing power.

The particularly "abstract" or "technical" parts of many other subjects seem quite accessible because they boil down to maths you already know. You generally feel confident about your ability to learn most quantitative ideas and techniques. A theoretical physicist friend likes to say, only partly in jest, that there should be books titled "______ for Mathematicians", where _____ is something generally believed to be difficult (quantum chemistry, general relativity, securities pricing, formal epistemology). Those books would be short and pithy, because many key concepts in those subjects are ones that mathematicians are well equipped to understand. Often, those parts can be explained more briefly and elegantly than they usually are if the explanation can assume a knowledge of maths and a facility with abstraction.

Learning the domain-specific elements of a different field can still be hard -- for instance, physical intuition and economic intuition seem to rely on tricks of the brain that are not learned through mathematical training alone. But the quantitative and logical techniques you sharpen as a mathematician allow you to take many shortcuts that make learning other fields easier, as long as you are willing to be humble and modify those mathematical habits that are not useful in the new field.
You move easily among multiple seemingly very different ways of representing a problem. For example, most problems and concepts have more algebraic representations (closer in spirit to an algorithm) and more geometric ones (closer in spirit to a picture). You go back and forth between them naturally, using whichever one is more helpful at the moment.
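A small concrete instance of this algebra/geometry back-and-forth: multiplying complex numbers (algebra) is the same operation as composing rotation-and-scaling matrices of the plane (geometry). A quick sketch, assuming NumPy:

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """The dictionary: a + bi  <->  [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 1j, 1 - 3j
# Multiplying the numbers corresponds exactly to multiplying the matrices.
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))
```

Whichever representation is handier at the moment -- symbols or pictures -- the dictionary lets you carry results across.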

Indeed, some of the most powerful ideas in mathematics (e.g., duality, Galois theory, algebraic geometry) provide "dictionaries" for moving between "worlds" in ways that, ex ante, are very surprising. For example, Galois theory allows us to use our understanding of symmetries of shapes (e.g., rigid motions of an octagon) to understand why you can solve any fourth-degree polynomial equation in closed form, but not any fifth-degree polynomial equation. Once you know these threads between different parts of the universe, you can use them like wormholes to extricate yourself from a place where you would otherwise be stuck. The next two bullets expand on this.

Spoiled by the power of your best tools, you tend to shy away from messy calculations or long, case-by-case arguments unless they are absolutely unavoidable. Mathematicians develop a powerful attachment to elegance and depth, which are in tension with, if not directly opposed to, mechanical calculation. Mathematicians will often spend days figuring out why a result follows easily from some very deep and general pattern that is already well-understood, rather than from a string of calculations. Indeed, you tend to choose problems motivated by how likely it is that there will be some "clean" insight in them, as opposed to a detailed but ultimately unenlightening proof by exhaustively enumerating a bunch of possibilities. (Nevertheless, detailed calculation of an example is often a crucial part of beginning to see what is really going on in a problem; and, depending on the field, some calculation often plays an essential role even in the best proof of a result.)

In A Mathematician's Apology ['s%20Apology.pdf, the most poetic book I know on what it is "like" to be a mathematician], G.H. Hardy wrote:

"In both [these example] theorems (and in the theorems, of course, I include the proofs) there is a very high degree of unexpectedness, combined with inevitability and economy. The arguments take so odd and surprising a form; the weapons used seem so childishly simple when compared with the far-reaching results; but there is no escape from the conclusions. There are no complications of detail—one line of attack is enough in each case; and this is true too of the proofs of many much more difficult theorems, the full appreciation of which demands quite a high degree of technical proficiency. We do not want many ‘variations’ in the proof of a mathematical theorem: ‘enumeration of cases’, indeed, is one of the duller forms of mathematical argument. A mathematical proof should resemble a simple and clear-cut constellation, not a scattered cluster in the Milky Way."


"[A solution to a difficult chess problem] is quite genuine mathematics, and has its merits; but it is just that ‘proof by enumeration of cases’ (and of cases which do not, at bottom, differ at all profoundly) which a real mathematician tends to despise."

You develop a strong aesthetic preference for powerful and general ideas that connect hundreds of difficult questions, as opposed to resolutions of particular puzzles. Mathematicians don't really care about "the answer" to any particular question; even the most sought-after theorems, like Fermat's Last Theorem, are only tantalizing because their difficulty tells us that we have to develop very good tools and understand very new things to have a shot at proving them. It is what we get in the process, and not the answer per se, that is the valuable thing. The accomplishment a mathematician seeks is finding a new dictionary or wormhole between different parts of the conceptual universe. As a result, many mathematicians do not focus on deriving the practical or computational implications of their studies (which can be a drawback of the hyper-abstract approach!); instead, they simply want to find the most powerful and general connections. Timothy Gowers has some interesting comments on this issue, and disagreements within the mathematical community about it.

Understanding something abstract or proving that something is true becomes a task a lot like building something. You think: "First I will lay this foundation, then I will build this framework using these familiar pieces, but leave the walls to fill in later, then I will test the beams..." All these steps have mathematical analogues, and structuring things in a modular way allows you to spend several days thinking about something you do not understand without feeling lost or frustrated. (I should say, "without feeling unbearably lost and frustrated"; some amount of these feelings is inevitable, but the key is to reduce them to a tolerable degree.)

Andrew Wiles, who proved Fermat's Last Theorem, used an "exploring" metaphor:
"Perhaps I can best describe my experience of doing mathematics in terms of a journey through a dark unexplored mansion. You enter the first room of the mansion and it's completely dark. You stumble around bumping into the furniture, but gradually you learn where each piece of furniture is. Finally, after six months or so, you find the light switch, you turn it on, and suddenly it's all illuminated. You can see exactly where you were. Then you move into the next room and spend another six months in the dark. So each of these breakthroughs, while sometimes they're momentary, sometimes over a period of a day or two, they are the culmination of—and couldn't exist without—the many months of stumbling around in the dark that precede them."

In listening to a seminar or while reading a paper, you don't get stuck as much as you used to in youth because you are good at modularizing a conceptual space, taking certain calculations or arguments you don't understand as "black boxes", and considering their implications anyway. You can sometimes make statements you know are true and have good intuition for, without understanding all the details. You can often detect where the delicate or interesting part of something is based on only a very high-level explanation. (I first saw these phenomena highlighted by Ravi Vakil, who offers insightful advice on being a mathematics student.)

You are good at generating your own definitions and your own questions in thinking about some new kind of abstraction.
One of the things one learns fairly late in a typical mathematical education (often only at the stage of starting to do research) is how to make good, useful definitions. Something I've reliably heard from people who know parts of mathematics well but never went on to be professional mathematicians (i.e., write articles about new mathematics for a living) is that they were good at proving difficult propositions that were stated in a textbook exercise, but would be lost if presented with a mathematical structure and asked to find and prove some interesting facts about it. Concretely, the ability to do this amounts to being good at making definitions and, using the newly defined concepts, formulating precise results that other mathematicians find intriguing or enlightening.

This kind of challenge is like being given a world and asked to find events in it that come together to form a good detective story. You have to figure out who the characters should be (the concepts and objects you define) and what the interesting mystery might be. To do these things, you use analogies with other detective stories (mathematical theories) that you know and a taste for what is surprising or deep. How this process works is perhaps the most difficult aspect of mathematical work to describe precisely but also the thing that I would guess is the strongest thing that mathematicians have in common.

You are easily annoyed by imprecision in talking about the quantitative or logical. This is mostly because you are trained to quickly think about counterexamples that make an imprecise claim seem obviously false.

On the other hand, you are very comfortable with intentional imprecision or "hand-waving" in areas you know, because you know how to fill in the details. Terence Tao is very eloquent about this here:

"[After learning to think rigorously, comes the] 'post-rigorous' stage, in which one has grown comfortable with all the rigorous foundations of one’s chosen field, and is now ready to revisit and refine one’s pre-rigorous intuition on the subject, but this time with the intuition solidly buttressed by rigorous theory. (For instance, in this stage one would be able to quickly and accurately perform computations in vector calculus by using analogies with scalar calculus, or informal and semi-rigorous use of infinitesimals, big-O notation, and so forth, and be able to convert all such calculations into a rigorous argument whenever required.) The emphasis is now on applications, intuition, and the 'big picture'. This stage usually occupies the late graduate years and beyond."

In particular, an idea that took hours to understand correctly the first time ("for any arbitrarily small epsilon I can find a small delta so that this statement is true") becomes such a basic element of your later thinking that you don't give it conscious thought.
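Spelled out, the epsilon-delta idea referred to above is the standard definition of a limit:

```latex
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\quad
0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon
```

With practice the whole quantifier dance collapses into a single mental move, which is exactly the "post-rigorous" compression Tao describes.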

Before wrapping up, it is worth mentioning that mathematicians are not immune to the limitations faced by most others. They are not typically intellectual superheroes. For instance, they often become resistant to new ideas and uncomfortable with ways of thinking (even about mathematics) that are not their own. They can be defensive about intellectual turf, dismissive of others, or petty in their disputes. Above, I have tried to summarize how the mathematical way of thinking feels and works at its best, without focusing on personality flaws of mathematicians or on the politics of various mathematical fields. These issues are worthy of their own long answers!

You are humble about your knowledge because you are aware of how weak maths is, and you are comfortable with the fact that you can say nothing intelligent about most problems. There are only very few mathematical questions to which we have reasonably insightful answers. There are even fewer questions, obviously, to which any given mathematician can give a good answer. After two or three years of a standard university curriculum, a good maths undergraduate can effortlessly write down hundreds of mathematical questions to which the very best mathematicians could not venture even a tentative answer. (The theoretical computer scientist Richard Lipton lists some examples of potentially "deep" ignorance here: ... rassments/) This makes it more comfortable to be stumped by most problems; a sense that you know roughly what questions are tractable and which are currently far beyond our abilities is humbling, but also frees you from being very intimidated, because you do know you are familiar with the most powerful apparatus we have for dealing with these kinds of problems. ... erspheres/
In my last post on higher dimensions, I alluded to the fact that I don’t agree completely with certain notions about higher dimensions. Specifically, I disagree with the idea that the intuition that you take for granted in low dimensions is necessarily ill-equipped to serve you in higher dimensions. Low-dimensional intuition is ill-equipped for many problems, and like most other topics in math, it’s usually most sensible to do the calculations anyway.

Hyperspheres often get brought up with the subject of weirdness in higher dimensions, mostly because they’re easy to understand, and it’s easy to demonstrate the weirdness very quickly. But are they completely weird? Are the examples really fair, or are hyperspheres getting a bad rap?

So what’s my point? This is all sounding very counterintuitive. My point is that when you talk about counter-intuition in higher dimensions, it’s helpful to talk about what’s actually going on, instead of maligning poor innocent constructs like the hypersphere. What’s actually going on? More about that later.

But for now, consider this: no matter how many dimensions a sphere has, it’s always perfectly round, and perfectly isotropic. That’s intuition that isn’t lost in higher dimensions. ... s-measure/
To recap, I was talking about how the volume of a unit hypersphere measured the normal way (with its radius = 1) approaches zero with increasing dimension. I also mentioned that even though a “unit” hypercube that circumscribes the unit sphere (i.e., a hypercube with inradius = 1) has volume that increases exponentially with the dimension (2^d), a hypercube with circumradius = 1 has volume that decreases even faster than the volume of the hypersphere. Why is one configuration different than the other?

The answer is that they’re not different. A cube is a cube, no matter how you orient it. If its side is of length s, then its volume is s^d. What’s different here is our notion of unit measure. We commonly define a unit of volume as the volume of a hypercube with sides of unit length. In that light, it’s not terribly surprising what we know about the volume of hypercubes. So why can’t we just define the unit hypersphere to have unit volume?

This seems objectionable until you realize that we do this all the time in the real world. What’s a gallon? It has nothing to do with an inch or foot. So why do we worry ourselves over defining volume in terms of one-dimensional units? The metric system doesn’t even adhere to this standard. A liter is a cubic decimeter. Why? It just worked out that way. Since these units are all just arbitrary, we could just declare that unit volume is the volume of a unit hypersphere. Or not. So a hypersphere’s volume really isn’t that weird. What seems weird is the discrepancy between the geometries of the hypercube and hypersphere.
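These volume claims are easy to verify with the closed form for the volume of the unit d-ball, V_d = π^(d/2) / Γ(d/2 + 1). A sketch using only the Python standard library (the cube side lengths 2 and 2/√d follow from inradius 1 and circumradius 1 respectively):

```python
import math

def ball_volume(d):
    """Volume of the unit d-ball: pi^(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

for d in (2, 5, 10, 20):
    cube_in = 2.0 ** d                      # inradius 1: side 2, blows up
    cube_circ = (2 / math.sqrt(d)) ** d     # circumradius 1: side 2/sqrt(d)
    print(d, round(ball_volume(d), 6), cube_in, cube_circ)
# The ball's volume peaks near d = 5 and then tends to zero;
# the circumradius-1 cube shrinks faster still.
```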
Intuition is a conceptual bird's-eye view that allows humans to draw inferences from high-level abstractions without having to systematically trace out each step. Intuition is a wormhole. Intuition allows us to get from here to there given limited computational resources.

In mathematics, intuition is generally not used as evidence to support a conclusion, but instead as a tool with which to search for a rigorous way to solve a problem. First of all, this makes intuition a lot less dangerous. If a voter's intuition tells him that some particular economic policy will be beneficial, then he is likely to rely on his intuition being right, and can harm public policy if he is wrong. If a mathematician's intuition tells him that a certain way of attacking a problem is likely to be fruitful, he will fail to solve the problem if he is wrong. But if the mathematician intuitively feels that premise P is true, and he can use it to prove theorem T, he will not state T as fact. Instead, he will state that P implies T, and mention that he finds this especially interesting because he believes P to be true. Secondly, this makes mathematical intuition trainable. Although our brains are not optimized for math, they are extremely adaptable. When a mathematician tries a fruitless path towards solving a problem as a result of bad intuition, he will notice that he has failed to solve it, update his intuitions accordingly, and try a different way. Similarly, he will notice when his intuition helps him solve a problem, and he'll figure out what his intuition did right. ... -metaphor/
My favorite analogies explain a thought and help you explore deeper truths. Here’s a metaphor that captures my stance on learning:

Rote details are arrows, intuition is the bow.
Details (arrows) do the actual work, but are cheap & plentiful
Intuition (the bow) is the framework that makes the details effective (in theory, it’s optional; in practice, it’s not)
Hunting (effective problem solving) is the ultimate goal: how can the entire system help us?
In this clear and engaging analysis, Devlin sets forth a thesis quite contrary to common experience: that reasoning about relationships between mathematical objects is no different from reasoning about relationships between people, and that this similarity in reasoning explains the origin of mathematics.
For we will see that in our modern contexts the ancient definitions of intellectus have been introduced into the modern definitions of intuition --- while the ancient definitions of intuitus have been lost altogether.

If your head is spinning about now, don't worry. All that has happened is that rationalists have constructed our modern definitions. And rationalists don't like feelings. This will become clear when we compare the modern definitions of intellect and intuition below.
INTELLECT (the modern definition derived from the Latin "intellectus"): the power of cognition or knowing as distinguished from the power to feel or will; the capacity for knowledge.

INTELLECTUS (the ancient definition): to perceive and discern felt relationships.

INTUITION (the modern definition allegedly derived from the Latin "intuitus"): immediate apprehension, cognition, or knowledge.

INTUITUS (the ancient definition) to look at and contemplate felt relationships.

Now, to increase our clarity here, let's talk our way through these four definitions.
First of all, we can immediately see that the concept of "felt relationships" has DISAPPEARED in both of the modern definitions of intellect and intuition. In their modern contexts, then, by definition neither intellect nor intuition deal with felt relationships.

Second, we can see that "felt relationships" linked intuitus and intellectus together, but in two different ways. Intuitus looked at and contemplated felt relationships, while intellectus perceived and discerned among them. In a certain sense, then, intellectus and intuitus were two sides of the same coin, or twin brother and sister.

Third, to now point out an oddity, in our modern English exists no conceptual term that has to do with dealing with "felt relationships."

Fourth, we can see that intuitus/intellectus represented an interconnecting biomind holism by the definitional link of "felt relationships." We can also see that the modern definitions of intellect and intuition break them apart, and that neither deal with "felt relationships."

Fifth, to now point out an oddity, in our modern English exists no conceptual term that has to do with dealing with "felt relationships" --- except one which is seldom used and is often considered obsolete: SENSORIUM.

Sixth, the modern definitions of intuition DO NOT fit with the chief phenomenology of intuition --- which is ALWAYS expressed as feelings by those who experience some kind of intuition. The modern definitions do not even include the word "feeling."

Seventh, we can now perceive, opaquely so at least, that intuition is composed of sensed feelings which result in the hunches of intellect, for a hunch is a cognitive analysis derived from what has been felt. ... pp1_18.pdf

We argue that the individual interacts with a mathematical visualization in a way which is better or worse depending on previous knowledge and on the context. This interaction is important and may even be necessary; the meaning of the visualization is not independent of the observer. A concrete example is when a teacher illustrates a circle by drawing it on the blackboard, as in Figure 9 below

The picture on the blackboard is not a circle, since it is impossible to draw a perfect circle. But for a person who knows that a circle is a set of points in the plane that are equidistant from the midpoint, the picture on the blackboard is sufficient to understand that the teacher is talking about a “mathematical” circle. However, for a child who has never heard of a circle before, the figure on the blackboard probably means something else. By looking at the circle in Figure 9 the child may even think that a circle is a ring which is not connected at the top. The point is that visualizations can certainly be sufficient for convincing oneself of the truth of a statement in mathematics, provided that one has sufficient knowledge of what they represent. ... uition.pdf
Before we define more formally what we mean by intuition, in general, and symbolic intuition, in particular, we would like the reader to try and solve the following five problems, taken from a typical post-arithmetic mathematics curriculum:

(1) How many lines pass through any given two points in 2-dimensional space?
(2) A fair coin is flipped 10 times. The first 9 flips all come up heads. What is
the probability that the next, 10th toss, will come up heads?
(3) Which set has more members, the set of all rational numbers (numbers that
can be expressed as one integer over another) or the set of all irrational numbers
(non-repeating decimals)?
(4) What is 8% of 142?
(5) Solve for x: “3x + 7 = 19”

Now, try and introspect about your experiences. To what extent did you have intuitions about solving these problems and what were they like? Most high-school students and adults have correct intuition about the first problem (only one line can pass through two points in 2-dimensional space), incorrect intuitions about the second and third problems (erroneously assuming that there is a greater probability that the tenth toss will come up heads, and that all infinities are alike, respectively) and no intuitions about the fourth and fifth problems, because computing percentages and solving polynomials can usually be done by employing procedures without understanding of the concepts involved.

Classical intuitionists viewed intuition as “a special contact with prime reality, producing a sense of ultimate unity, true beauty, perfect certainty, and blessedness” (Westcott, 1968, p.22). According to this viewpoint, intuition is antithetical to reason. The knowledge gained through intuition cannot be verified, supported, or even understood intellectually. Intuitive knowledge is not practical or applicable. It is considered to be a priori and independent of prior knowledge. Some more modern conceptualizations in psychology and education embrace similar views on intuition. For example, Resnick (1986) views mathematical intuitions as cognitive primitives that can function without formal mathematical analysis. Similarly, Dixon and Moore (1996) define intuitive understanding of a problem as a representation that is distinct from the representation of the formal solution procedure for solving the problem.
The traditional approach to developing primary mathematical intuition would involve refining the instruction that accompanies problems two through five, in order to enable students to connect new material with their pre-existing intuitions. An alternative view would be to foster the development of new, secondary intuitions. For example, Fischbein (1975) suggests that a student can develop the intuition that the probability of heads on the 10th flip is 0.5 by a process of experimentation and reflection (actually tossing a coin many times and observing the outcomes). The idea is that secondary intuitions can be distilled from engaging in active exploration and experimentation.
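Fischbein's coin experiment is easy to mechanize: simulate many runs of ten fair flips, keep only those whose first nine flips were all heads, and tally the tenth. A sketch using only the Python standard library:

```python
import random

random.seed(1)

def tenth_flip_given_nine_heads(trials=500_000):
    """Estimate P(10th flip is heads | first 9 flips were heads)."""
    kept = heads = 0
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(10)]
        if all(flips[:9]):        # condition on nine heads in a row
            kept += 1
            heads += flips[9]
    return heads / kept

print(tenth_flip_given_nine_heads())   # ≈ 0.5: the coin has no memory
```

Runs of nine heads are rare (about one trial in 512), but among those runs the tenth flip still lands heads about half the time.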

Of course our intuition can be strong or relatively free of "noise" but if the thing we are exploring has a weak or highly noisy "signal" it can still be hard to get a high fidelity understanding of the topic. From a schematic of an analog to digital converter for a radio receiver:
[schematic image] ... ing-3.html
In English, the general concept of perception has not changed very much since 1828 -- even though accumulating evidence and knowledge since then has established that the general concept is complete nonsense. For example, to merely observe or receive impressions does not automatically equate with knowledge or understanding.

The pre-1828 actual etymology in English of TO PERCEIVE and PERCEPTION has not been established very well.
The approximate dates of the earliest noted uses in English of these two terms are the thirteenth and fourteenth centuries -- which seems a bit late for such fundamental conceptualizing nomenclature.
If we carefully inspect the earliest recorded etymological basis, we can find not one, but two major concepts regarding what we today have collapsed into just one, and which is entirely misconceived.

On the one hand, there was a similar archaic term, PERCEANT (apparently derived from Old French PERCER (to pierce)) which meant penetrating, keen, piercing -- the "keen" motif implying some kind of penetrating/piercing/discriminative faculties.
On the other hand, the two terms PERCEANT and PERCER are later thought to have been derived from the Latin PER + CAPERE, PER meaning "by or through," CAPERE meaning to take -- ending up with "to take by or through" (something?).

However, to "penetrate-pierce keenly" and to "take [in?] by or through" are two entirely different concepts, the first being an active-like out-flowing, the second being a passive-like in-flowing.

Now, the standard definition of A PERCEPTION is an attainment of awareness or understanding, while the most used definition of TO PERCEIVE is to become aware of through the senses.
And so some dreaded complexities arise, whether we like them or not.
Furthermore, awareness of and understanding are really two different things -- for one can be aware of something and not understand it at all, or understand something in the complete absence of awareness about what is being "understood."

But in spite of this essential confusion regarding awareness and understanding, both the active out-going and the passive in-taking formats of perception imply direct routes between the perceiver and what is being perceived.
The active out-going format also implies a "search, discover, and pierce" activity. The passive take-in-through (the senses) format implies a "sit back and receive" activity.
As these two concepts stand, then, a direct link out to or receiving into is implied.
John Flinders Petrie (1907-1972) was the only son of Egyptologist Sir W. M. Flinders Petrie. He was born in 1907 and as a schoolboy showed remarkable promise of mathematical ability. In periods of intense concentration he could answer questions about complicated four-dimensional objects by visualizing them.

He first realized the importance of the regular skew polygons which appear on the surface of regular polyhedra and higher polytopes. He was a lifelong friend of Coxeter, who named these polygons after him.

The idea of Petrie polygons was later extended to semiregular polytopes.


Alexander Grothendieck (German: [ˈgroːtn̩diːk]; French: [gʁɔtɛndik]; born 28 March 1928) is a German-born mathematician and the central figure behind the creation of the modern theory of algebraic geometry. His research program vastly extended the scope of the field, incorporating major elements of commutative algebra, homological algebra, sheaf theory, and category theory into its foundations. This new perspective led to revolutionary advances across many areas of pure mathematics.

Within algebraic geometry itself, his theory of schemes has become the universally accepted language for all further technical work. His generalization of the classical Riemann-Roch theorem launched the study of algebraic and topological K-theory. His construction of new cohomology theories has left deep consequences for algebraic number theory, algebraic topology, and representation theory. His creation of topos theory has had an impact on set theory and logic.

One of his most celebrated achievements is the discovery of the first arithmetic Weil cohomology theory: the ℓ-adic étale cohomology. This key result opened the way for a proof of the Weil conjectures, ultimately completed by his student Pierre Deligne. To this day, ℓ-adic cohomology remains a fundamental tool for number theorists, with important applications to the Langlands program.

Grothendieck’s way of thinking has influenced generations of mathematicians long after his departure from mathematics. His emphasis on the role of universal properties brought category theory into the mainstream as an important organizing principle. His notion of abelian category is now the basic object of study in homological algebra. His conjectural theory of motives has been a driving force behind modern developments in algebraic K-theory, motivic homotopy theory, and motivic integration.

Driven by deep personal and political convictions, Grothendieck left the Institut des Hautes Études Scientifiques, where he had been appointed professor and accomplished his greatest work, after a dispute over military funding in 1970. His mathematical activity essentially ceased after this, and he devoted his energies to political causes. He formally retired in 1988 and within a few years moved to the Pyrenees, where he currently lives in isolation from human society.

The somewhat strange title “The Mutants” (a word that in French too comes from the vocabulary of science fiction) refers to people who differ from “mere mortals” in a spiritual way; in particular, they are ahead of their times. At one place in the text Grothendieck gives the following explanation of this concept (slightly shortened in translation):
There have been in this century (as doubtless in other centuries past) a certain number of isolated men who
seem to my eyes to be “new men”—men who appear to be “mutants” and who already today, in one way or another, prefigure the “man of tomorrow” embodied in the present; the man in the full sense of the word, who undoubtedly will emerge in the generations to come, in the course of the “post-herd” age, of which the dawn is very close and which they tacitly herald.

For hundreds of pages Grothendieck describes and discusses the lives and works of a total of eighteen mutants. It becomes clear that he sees a personal connection between these mutants and himself; for example, he occasionally calls himself their heir, or he calls them his elders. We now give the list of these mutants, as he assembled it himself. No doubt their selection is rather arbitrary. A central (and not very original) theme in Grothendieck’s thinking is the spiritual decline of humanity, necessarily followed by an apocalypse and soon thereafter by the “new age”, the age of freedom and self-determination and of life in harmony with one’s own “soul”. The mutants are people who announce and anticipate this new age.
штрафбат вперед
User avatar
General Patton
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Wed Jun 19, 2013 2:02 pm

From a former corrections officer: ... amics.html
We were taught (and I have seen again and again over the years) that there are two basic types of groups: Goal-Oriented and Longevity-Oriented.

Goal-Oriented teams exist to accomplish a mission. Your status with the team is based entirely on your contribution to getting the job done. Hard work, intelligence and creativity are valued and rewarded. There is no need for office parties or company picnics*. It is not a social club and when the mission is accomplished, the members drift off. This last is hard for some people to understand- in bad times, a good team can be tighter than family and then, when the bad times are over, go on to separate lives.

Longevity-Oriented groups exist to perpetuate the group. Status is based on rank and service to the group. Hard work and intelligence may be rewarded, but only so long as they don’t make others feel stupid. Creativity almost always threatens the status quo, and is almost always discouraged in a Longevity-Oriented group. Social ritual, whether hazing and initiations** or policy and protocol are the lifeblood of the LO group.

A pure group type is very rare. Even an extreme GO team, unless they are assembled for a single mission, will have to deal with training, logistics and the day-to-day issues of work between missions. Even the most bureaucratic LO team still (I hope) has some kind of job to do, some mission. They will also occasionally have crises that will require at minimum a few mission-oriented thinkers.

Larger organizations will have mixed departments. A GO tactical team will often work for an agency that is primarily run by LO administrations. The GO elements have a job to do, perhaps the primary job of the agency. The LO elements arrange for funding and coordination with other agencies and all of the little social details that are so critical to success and survival in both the government and corporate wilds. Both the GO and LO elements tend to think they are carrying the whole agency.

Something taxpayers should pay attention to: when they identify a problem and through ballot or protest demand a fix, they are expecting a GO response. They want the problem fixed. Due to the vagaries of funding, bureaucracy and labor law, the solution will be constructed and run on LO principles. A Longevity-Oriented group is not benefited by accomplishing the mission and ceasing to exist. It is benefited by being able to show incremental progress.

Lastly, there are GO and LO individuals, though LO should actually be changed to Relationship-Oriented when talking about individuals. Generally, they will be happier in the group that matches their personalities. Yet the groups need a mix. LO groups need Goal-Oriented staff to deal with the basic job and to respond to any emergencies that arise. Goal-Oriented groups get a lot of benefit from having a few Relationship-Oriented people. No matter how into the mission, a good team still has egos and feelings. I was always careful to have a couple of ROs on my Tactical Team and I never regretted it. ... oring.html
For a day, I was a mentor's mentor. It was fun and strange and silly. The kid is young and smart and sincere. He wants answers and he is doing the right thing, putting himself in the company of people who live like they have them. But it's hard, because like everyone else, he asks a question and wants an answer... and almost every time the answer is "That's not a real question."

What he sees as mastery he thinks of as having the right stuff, having more answers and insights. It's almost exactly the opposite- it's about not having the wrong stuff in your head or your life; about having fewer questions that are more real; clear sight.

Side note re Anthropologists: An anthropologist must cast whatever culture he is studying in a somewhat favorable light and navigate its taboos. Those he studies will read his work, and permit or deny him further access to their culture based on how they feel about his published work. ... ation.html
That aside, most officers don't trust social scientists, and not because of any background in what constitutes science. It is because so much of what they've been told by social scientists is wrong. It fails. In my experience social scientists are very in tune and correct about some things. They know the way that relatively rich, genteel, polite, educated undergraduates think. They know how other extremely educated, privileged academics think. And living in one of the most intellectually inbred worlds possible, they believe they know how all people think. Anthropologists are the exception. Many get their hands dirty.

The things the academics teach -- why they believe that the privileged people they know might turn to crime -- fail for the cops who are dealing with a certain demographic.

Point one: Distrust will arise when concepts come from bad science.
Point two: Distrust will arise when concepts come from irrelevant science.

When theories are presented as theories, with no touchstone to reality, they don't sit well in your head. I use Maslow, but point out that as a theory there are holes. Big holes. But as a model it is useful. And I tie that model not to my experience but to the experience of everyone in the class.

Point three: Trust is increased with instructor humility. "The theory has holes."
Point four: Trust is increased when you can tie the concept to the student's direct, personal experience.

Though there are theories underlying Conflict Communications, the theories don't matter. The entire course was back engineered. What worked? Why did it work? No idea? Then think it through. What did the things that work have in common? Oh, that reminds me of a theory from college, let me look it up... It worked. It worked before it was ever a system. And it would continue to work as a model even if the underlying theories proved false. The only difference would be not knowing why it worked...and there is no guarantee that we know the real 'why' now. So:

Point five: The appearance of science increases trust, even if it doesn't increase effectiveness or truth. This trust effect is purely psychological.
Point six: Something that works increases trust.

I've dealt with the same bad stuff that these recruits will and these officers have. We have a related background. I bring stuff to them from their world, not from an imaginary world of theory and statistical analysis. Someone pointed out recently that psychologists are about 40% accurate in predicting violence. I don't know where he got the statistic or over what time frame or with what data. But I will say that if cops were only 40% accurate the profession would be extinct. The tools I show them are ones I have bet my life on. No academic can say that.

Point seven: Trust increases if the concepts come from the same world and use the same language where the skills must be applied.
Point eight: Trust increases when people you trust rely on the concepts when the stakes are high.

The social scientist was a good, intelligent person who cared about these recruits. But she had the same trust issues accepting data from a thug jail guard as they did accepting information from an academic. And for valid reasons. My insights did not come from her world and might not work in her world. I don't have the symbols of authority and trust in her world that I do in mine. And as much as I've bet my life, I've never bet my tenure on a wild theory and, in her world, tenure seems more real.

Two more points, to make an even ten:
Point nine: Knowing the limits of your concept increases trust. To say something always works brands you as a liar or a fool.
Point ten: Trust is reciprocal. I tell the students that I trust them to do the right thing, to make the decisions, to decide when the model will fail and do something else. People trust people who trust them.

Signs of an adrenaline rush: ... signs.html
There are lots of symptoms of adrenaline-- breathing changes, pulse rate, pupils-- that I don't care about because you can't see them. Signs are distinguished from symptoms in that signs are what you can see.

So common adrenaline signs:
Gross motor activity. Under an adrenaline dump you want to move. Pace. Flex. It seems like as the adrenaline increases both the activity increases (the pacing becomes faster) and seems to concentrate in the big muscle groups-- legs and shoulders.
Clumsiness. Big muscle groups up, small muscle groups down. Shaking, dropping things.
Voice gets higher pitched. Loud is one thing, but I listen for the squeak. Couple of reasons. The funny one is that every team leader so far has had his voice crack the first time he gave the ask-advise-order-check. That reads as nervous to the threat, and we almost always had to fight. Second reason, high pitched voices are one of the signs of fear and fear, like any emotion, is contagious. If one person squeaks or screams, nearby people are more likely to get stupid. Third reason, if the threat hears his own voice break, he may feel compelled to fight to prove that he is not afraid.

Swallowing and licking lips. Or drinking a lot of water if available. Adrenaline burns up a lot of water and makes you very thirsty. Side note: Tardive dyskinesia is one of the side effects of long-term use of psych meds. Street people call it the 'thorazine twitch.' Tardive dyskinesia also involves a lot of lip-licking with darting tongue movements but will also have sharp twitches and (usually) hard blinking.

Rhythmic movement. Almost every person I've seen under an adrenaline dump does something rhythmic. They tap their fingers (especially if they are trying to hide the fear/anger.) Or they bounce on their toes. Some hum. Not usually whistling, the mouth is too dry to whistle.
Color change. Getting red is part of the threat display. These guys don't tend to bother me. They might get stupid and become dangerous, but that's not the sign I'm looking for. When a threat goes pale, things are about to step off. The paleness, of course, comes from peripheral vasoconstriction. The body is trying to make sure that if the saber-toothed tiger gets an arm or a leg you won't bleed too much. Think of sudden pallor as the body clearing the deck for action. Things are imminent.

Danger happens at the intersection of adrenaline and purpose. A drowning man will be adrenalized and have the purpose of breathing, which makes you look like a flotation device. A mugger needs money for drugs and will get his adrenaline into the zone to do the crime.

Some notes, before we go on.
1) Fear, anger and love. I'm a big believer in the James-Lange theory of emotion. The theory states that first there is an event, then there is a hormone dump and THEN you ascribe an emotion to it. They noticed that there's not really a huge difference in the signs and symptoms of intense emotional states. If your mouth is dry and your palms are sweating and your knees are weak and your breathing is rapid and shallow... are you afraid? Or in love?

You get those symptoms when you see a bear, you call it fear. See someone attractive, the exact same symptoms are called 'falling in love.'

So, especially for this subject matter, fear and anger are different labels for the same chemical state. The labels, however, can be powerful motivators. If you call it fear, your instinct may be to curl up in a fetal position. You call it anger and you may fight. There is huge power in consciously labeling. More power, IMO, in NOT labeling and just using the chemicals... but I don't think that's something you can do the first several times. Maybe.

2) Whistling and lighting cigarettes. There are some iconic things in old movies. Lighting a cigarette will show any tremor in your hands, and it is one of the things the heroes and some of the bad guys used to do to show how calm and in control they were. In real life, back when bars allowed smoking, many bouncers practiced so that they could calmly light a cigarette under an adrenaline dump. People subconsciously got it. Calm can be very intimidating in the right circumstances. Same with whistling. I don't suggest whistling around threats, especially mentals, since any high-pitched sound tends to increase adrenaline, but it might help calm you.

Secondary signs.
Most of the adrenaline control methods taught require a certain amount of time. They work better for people responding to a violent situation than people who are attacked. There are a few tricks, but this is about reading a threat, not controlling yourself.

Someone engaged in social violence generally won't try to hide his adrenaline. It's part of the show. The two groups that will try to hide it are criminals and professionals.

Professionals (like bouncers lighting cigarettes mentioned above) tend to have elaborately relaxed body language. Their job is to defuse the situation if at all possible, so they will close distance and get in position while giving relaxed and non-threatening body language. They will be focused on the threat, however. If you see someone who should be showing the signs and isn't and they are focused, assume you have a professional. (As opposed to someone who should be adrenalized and is oblivious, in which case you have your basic nitwit.)

Criminals have to close the distance and set you at your ease. They have to appear NOT to be focused on you and they want to control the adrenaline. Many will engage in self-calming behavior. When your kids are hurt or afraid you pick them up and hug them, right? You basically pet them like small animals. Self-calming is doing that solo. Rubbing the face or neck are the most common.

This probably goes at the end, but danger is in the matrix. When you see someone rubbing his neck and not making direct eye contact but looking at you, it's a sign he is adrenalized and trying to control it. If you've known him for a while (the social aspect of the matrix) he's probably working up his nerve to ask for a date. If he's a stranger? Hmmm. If he is a stranger standing at an abnormal range, with asocial feet alignment and no witnesses? Big red flag.

There is one more professional reaction, but not necessarily criminal. One of the things with criminals is that they can time when to attack, so they can control their own adrenaline. They can get themselves excited (with visualization, ritual or self-talk) to raise their adrenaline and they can get the adrenaline under control by waiting a little longer, breathing, or other self-calming behaviors.

Victims don't get that choice. When the threat arises, they get an adrenaline dump. If YOU are a force professional (LEO, soldier, bouncer) your job will be to accost people. From their point of view, you are the threat. You will use the same techniques bad guys use to control your own adrenaline (and, hopefully, more consciously, trained and taught and more effectively.) But the people you confront will not have that option. They will get an adrenaline dump.

If they go pale, things are on the edge of going bad.
If, however, the subject goes pale and relaxes and his eyes unfocus, you may be in for a very bad day. Most people tense and shrink up when the adrenaline hits hard. If you see the relax and the thousand yard stare you have stumbled on someone with extensive experience with adrenaline. He knows how to use every last drop of it. If you see this you may well be in for the fight of your life.

On the good side, if you see this the subject is still thinking clearly enough you can reason. You can rarely do that with the ones who go white and tense up.

Social scripts:
Along with the rules I'm coming to understand that there are scripts as well. Off and on over the years I've described several of them here without realizing it. The bad guy gets angry and expects you to either show fear or get angry back; the basic monkey dance; threat displays and dominance games...

A lot of the successful de-escalations have been by refusing to play the role or follow the script that was expected.

It would be easy to say that the scripts are social, but watching body language, even non-human primates do very similar things. One pushes and gets aggressive and loud, the other responds. Often both look to the bystanders to see if public approval is on one side or the other. Often they rely, just like people, on being separated by friends, getting to feel like they stood up without the danger of actually getting hurt.

When you start looking for it, you see this monkey behavior everywhere. And it seems so petty. There is great power in seeing the game and choosing not to play- it is almost a superpower to be able to focus on the problem and ignore the social mine-field surrounding it.

On Peace and Rehabilitation ... ation.html
You can't effectively work for peace without taking a good hard look at war or violence or conflict (or all three, depending on your definition). And not a knee-jerk, disapproving look, either. A good hard look at why, if something is so bad, it is so prevalent. Why, if something must be fixed, it is so endemic in the natural world.

It is exactly like any other group attempting to censor or ban any other thing. Prohibition was an ideal, largely put forward by self-righteous teetotalers. People talk about violence, it seems to me, the way that they talked about sex in the fifties. They don't. Most talk around it. If you have anything to say from experience, you are marginalized.

It kills dialogue. More to the point, it kills progress. Medicine advances as we learn more about disease. We solve problems by studying problems, not by meditating on an imaginary, problem-free end state. I guess, in a way, that is the defining difference between a peace-maker and a peace activist.

So let's get crime out of the way first. Not enough people look at it right. Crime fighting is an ideal, just like peace. And we won't make progress until we take a good hard look at why crime is prevalent. Which means acknowledging that it works. It satisfies needs. It's not just that there is little opportunity for honest employment in certain areas. There are damn few jobs, much less entry-level jobs, where you can make thousands of dollars a week, get automatic deference and an instant family.

Crime fighting is an attempt, instead of lowering the rewards of the criminal lifestyle, to raise the risks. Catch 'em, book 'em, hard time. You have to take a look, a hard look at whether that is a risk or even a punishment in this subculture... or just the way rugby players think about the occasional injury.
There is nothing broken. For the most part (possible mental illness and stuff aside) the serious criminal is not incomplete. There is no pathology. He is perfectly adapted for his world. The things that we think of as normal and good, the things we try to instill when we rehabilitate, might be profoundly dangerous behaviors when he goes back to his old haunts and sees his old friends.

We pretend we are fixing a person, but in reality we are trying to reshape him into a person that makes us more comfortable. Altering a human for our purposes, not his. In the process making him more likely to die in his natural environment and he damn well knows it.

The few people I know who have truly rehabilitated themselves, started by deciding they wanted to live in the non-criminal world. That's rare. If you become an adult in almost any environment, that becomes your comfort zone.
Raised in an environment where reading and manipulating people are far more valuable skills than getting along, the average criminal is better at reading and manipulating the people trying to 'help' or 'fix' than all but the best therapists. When you have consistently conned PhDs and psychiatrists, the best that civilized training can produce, it's natural to feel superior.
Then he sees the product of a peaceful (and, in his eyes, weak) world on its knees, begging to give him what he wants. He can't help but see that as the weak trying to make everyone else weak so they can feel safer.

Domination and Power: ... s-and.html
First thought: Violence is about dominance. Dominance (a very big thing with many levels) is the primary; violence a tool to achieve it. There are so many ways to dominate others that are lacking in violence or only offer the faintest hint of threat- but they still are an attempt to force someone to submit, to show not just respect but acquiescence. The passive-aggressive co-worker who stalls production or grumbles just out of hearing is just as coercive as any other kind of bully, only too insecure to expose himself to the risks attendant with just saying his heart. There is no such thing as a peaceful protest- just under the surface is the threat that this mass, these numbers could turn into a mob if they do not get what they want, and skillful protest organizers know that their power comes from fear and they use it. It is a very dark thing to watch people march and scream for peace and love. "Political correctness"- threats of litigation and disciplinary action under the guise of compassion and tolerance. And what is this tolerance where any ethnicity (or creed or gender or orientation or...) is encouraged but only one opinion is allowed? The essence of the entitlement mindset is "I should have the power to do anything I want, that's freedom, but you should be prevented from responding in any negative way because that is oppression." Reverse it and obviously the math doesn't work. It can be a very free-hearted, smiling evil.

This dominance is dark stuff and subtle and pervasive. For some people it is darkly addictive, too. To make someone else do what you want is power. Whether the coercion comes from whining or claiming a victim status or passive aggressive bullshit or naked violence it is power.

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Fri Jul 05, 2013 5:30 pm

On Narcissism, Co-Dependency and Normal People With A Game Theory Example
A true manipulator will always look at the game by assuming that the person he or she is playing against will compete. Competing is the manipulator’s natural mind-set…
People who play with the manipulator’s mind-set believe that everyone will automatically play to win -- or to maximize gain and minimize loss on each turn -- by playing competitively. However, this strategy works best for the competitive player only when the opponent plays cooperatively: The competitor gets $20, and the cooperator gets $0.

Manipulators always play the competitive move. When they first sit down to play with an opponent, they make the competitive move. Sometimes their opponent will make a cooperative move on the first try; sometimes he will not. However, given that the manipulator continues to play competitively, the originally cooperative opponent has no choice but to change tactics and become a nontrusting competitor as well. In this way, the opponent will improve his score by $1 (up from $0) and in so doing also reduce the manipulator’s score to $1.

On the other hand, consider the experience of people who examine the matrix and choose on their first move to play cooperatively, trusting the other player to also cooperate so that each gets $10 on every move. If both players do play cooperatively, over 10 moves, each will accrue $100. As long as both players continue to play cooperatively – that is, by trusting one another – their gain will be guaranteed over the course of the game.

However, if a cooperative person gets burned by a competitive person on the first few rounds, the only option open to the trusting person is to switch strategies and become competitive too – just as a defense.
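The payoff structure quoted above (mutual cooperation $10/$10, mutual competition $1/$1, competitor over cooperator $20/$0) can be sketched as a small iterated game. This is a minimal illustration under my own assumptions -- the strategy names and the ten-round horizon are not from the original text.

```python
# Minimal sketch of the payoffs quoted above. Strategy names and the
# ten-round horizon are illustrative assumptions, not from the source.

PAYOFFS = {
    ("C", "C"): (10, 10),  # both cooperate: $10 each
    ("D", "D"): (1, 1),    # both compete: $1 each
    ("D", "C"): (20, 0),   # competitor exploits cooperator
    ("C", "D"): (0, 20),
}

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def always_compete(opponent_history):
    return "D"

def realistic_cooperator(opponent_history):
    # Cooperates until burned once, then competes purely as a defense.
    return "D" if "D" in opponent_history else "C"

def unrealistic_cooperator(opponent_history):
    # Keeps cooperating no matter what (the codependent pattern).
    return "C"
```

Running this reproduces the dynamics described: two realistic cooperators earn $100 each over ten rounds; a realistic cooperator who meets an always-compete player is burned once ($0 vs $20), then defends, ending at $9 vs $29; only the unrealistic cooperator keeps rewarding the manipulator every round ($0 vs $200).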

What ends up happening according to studies of Prisoner’s Dilemma is that cooperators end up having more diverse experiences when playing the game over time and with many different opposing players. Over a series of games, a cooperator may cooperate with a specific player who is also cooperative and be rewarded for the experience. Both of them walk away happy. In another series of games, he may play against someone who is always competitive and be forced to shift to a competitive strategy also, simply as a defense. At the end of the experience of playing all those series of games against different players, when the cooperators are asked to summarize their feelings, they say that the game, like the world and life, is full of different types of people, good and bad, cooperative and competitive.

When a competitor is asked the same question, however, the response is different. The competitor almost always has the experience that the person he’s playing ends up playing like a selfish competitor also. This happens because his playing style, for reasons already described, ends up converting his opponent to a competitive strategy, yet he is not the type of person to allow himself to be converted to a cooperative strategy by the other person’s cooperative play. Thus, his experiences dealing with others are not as varied. His own behavior actually created the exact world he predicted: one where everyone in the long run is always out for themselves, no matter how they may pretend otherwise in the beginning, and therefore others aren’t to be trusted. He then uses this to justify his “get them before they get you” worldview.

The funny thing is, both the cooperator and competitor are correct in their worldviews. The cooperator thinks “Some people are good and cooperative and some people are bad and manipulative and you can’t treat everyone the same.” And what he’s saying is true for someone with his worldview and lifestyle. The competitor thinks “No matter how people act in the beginning, everyone is out for themselves and will show themselves to be selfish over time.” And what he’s saying is also true, for someone with his worldview and lifestyle. Even though they are both describing a different worldview, they are ironically both right about how the world is, at least for themselves specifically.
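The asymmetry of experience described above can be sketched as a tiny simulation: across a mixed population, the cooperator's record of how opponents ended up behaving is varied, while the competitor's record is uniformly competitive. The population mix and round count below are my own assumptions, purely for illustration.

```python
# Sketch of the "world-creation" effect described above. The population
# mix and round count are illustrative assumptions, not from the source.

def realistic_cooperator(opponent_history):
    # Cooperates until burned, then competes as a defense.
    return "D" if "D" in opponent_history else "C"

def always_compete(opponent_history):
    return "D"

def final_moves(me, opponents, rounds=10):
    """Play each opponent in turn; record the opponent's last-round move."""
    results = []
    for other in opponents:
        my_hist, their_hist = [], []
        for _ in range(rounds):
            a, b = me(their_hist), other(my_hist)
            my_hist.append(a)
            their_hist.append(b)
        results.append(their_hist[-1])
    return results

population = [realistic_cooperator, realistic_cooperator, always_compete]
```

The cooperator's world looks varied -- `final_moves(realistic_cooperator, population)` contains both "C" and "D" endings -- while the competitor's world is uniformly hostile: `final_moves(always_compete, population)` is all "D". Each player's experience confirms his own worldview, which is exactly the self-fulfilling dynamic the passage describes.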

You may note I said that even though the competitor will change the cooperator into a fellow competitor, at least for the duration of their shared interactions, the reverse is usually not true. The cooperator, by cooperating, won’t change the competitor into a cooperator. Why is this so? The answer lies in the psychological phenomenon of projection, in which a person believes other people have the same worldview and motivations behind their behavior that they do. Manipulators are only nice for two reasons. The first is because they are in a position of fear and weakness and have to cooperate for some reason or other, such as fear of losing something they already have, fear of not getting something, fear of retribution, fear of exposure, increased scrutiny by authorities, cowardice, etc. The second is because they want to lull the other player into a false sense of security to get them to lower their defenses and open themselves up to a bigger manipulation down the road. Because these are the only reasons they are ever nice or cooperative, when they see someone else being nice, they assume the other person is being nice for those same reasons: out of weakness or out of a desire to manipulate down the road. And of course, thanks to world-creation, when they inevitably end up converting the other player into a competitor, they are “proven” right and feel they called the whole situation correctly. Even more tragically, they typically never understand how their own distrustful behavior keeps creating distrust, manipulativeness, and rivalry in the people they encounter.

Manipulators don’t like to knowingly deal in a long term context with other manipulators or realistic cooperators, because neither will repeatedly reward the manipulator’s selfish tactics. The only person who would keep playing with and rewarding a manipulator is an unrealistic cooperator. If the competitor in the prisoner’s dilemma example corresponds to manipulators in real life relationships, then unrealistic cooperators correspond to codependents in real life relationships. Unrealistic cooperators/codependents, much like manipulators, tend to project their own worldviews and belief systems onto others and interpret the behaviors of others accordingly. So when they see a person behaving selfishly, they think that person has never been shown goodness or cooperation, or has been unfairly burned by others, because that is the only reason why they themselves would ever act so consistently bad. So they believe by showing excessive cooperation, even in the face of repeated selfish, competitive moves, they can end up changing the manipulator. The unrealistic cooperator can’t believe that another person can remain selfish, manipulative, and competitive in the face of consistent cooperation because the unrealistic cooperator himself couldn’t act selfish and manipulative when faced with consistent cooperation.

Because of the dynamic we described above, though, the unrealistic cooperator’s behavior ends up having the exact opposite effect on the manipulator. The manipulator is a person who only acts cooperative when he feels he is in a position of weakness and helplessness, or when he is getting a target’s defenses down in order to set them up for a future selfish, competitive move down the line. When the manipulator projects his mindset onto the unrealistic cooperator, and realizes that the unrealistic cooperator is never going to make a future selfish move, and therefore is not setting him up for a manipulation down the line, the manipulator assumes that the unrealistic cooperator must be chronically weaker than and fundamentally inferior to the manipulator.
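The cooperator/competitor dynamic described above is the classic iterated prisoner's dilemma, and the asymmetry the author describes is easy to see in a toy simulation. The sketch below uses standard textbook payoffs, and the strategy names are the usual game-theory ones rather than anything from the post; it shows a reciprocating cooperator being dragged into permanent defection by a pure defector after a single round:

```python
# Iterated prisoner's dilemma: a defector converts a reciprocator into
# a fellow defector, while cooperation never converts the defector.
# Row-player payoffs: T=5 (defect vs coop), R=3 (mutual coop),
# P=1 (mutual defect), S=0 (coop vs defect).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play `rounds` rounds; return the move pairs and both totals."""
    moves, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = strategy_a([m[1] for m in moves])  # a sees b's past moves
        b = strategy_b([m[0] for m in moves])  # b sees a's past moves
        pa, pb = PAYOFF[(a, b)]
        moves.append((a, b))
        score_a, score_b = score_a + pa, score_b + pb
    return moves, score_a, score_b

moves, sa, sb = play(tit_for_tat, always_defect)
print(moves[0])   # ('C', 'D') — the one exploited round
print(moves[1:])  # all ('D', 'D') — the cooperator has been converted
```

Swapping the pairing makes the other half of the point: no amount of cooperation from the tit-for-tat player ever changes what `always_defect` returns.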
Bullshit is more concerned with whether or not the statement is believable rather than whether it’s true. Bullshit is more concerned with the appearance of a statement, and what impressions it gives to the listener about the speaker, rather than the content and veracity of the statement. Therefore, bullshit is more concerned with identity, appearances, impression management, context, and image. A lie on the other hand is more concerned with actions, content, and substance. A liar is trying to mislead you about the truth-value of their statements. A bullshitter on the other hand is not trying to deliberately mislead you about the truth-value of their statement, because they themselves often don’t really know how true or false their own statements are. Instead, they are trying to mislead you about how knowledgeable they are on a particular topic.

For example, let’s say I am at a party at a mansion and there’s a fancy-looking vase on display. There is a beautiful girl standing next to me who I want to impress with my worldliness, and she asks me if I know what type of vase that is. I want to make a good impression on her, so I tell her the vase is probably a 15th century Ming, but in truth I know little about vases and am just saying whatever comes to mind that sounds plausible. I have no idea whether the vase is really a Ming or not, nor do I care much. What I really care about is presenting an image of myself as a worldly person. This is bullshitting.

Let me clarify what I mean when I say a bullshitter doesn’t care about the truth. A bullshitter may care about the truth to a small degree, to the extent that the truth will enhance or ruin his image or expose his fraudulent identity. So if his statement ends up being fraudulent, the bullshitter is concerned about that not coming to light and ruining the phony image of himself as an expert he is trying to sell the listener on. And if his statement ends up being true, he views that as a bonus in helping him sell that phony expert image, so he wouldn’t mind that truth coming to light. So he is to an extent concerned about the truth, but it’s always secondary to his image and identity, which are his primary concerns.
Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth.

On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared, so far as required, to fake the context as well. This freedom from the constraints to which the liar must submit does not necessarily mean, of course, that his task is easier than the task of the liar. But the mode of creativity upon which it relies is less analytical and less deliberative than that which is mobilized in lying. It is more expansive and independent, with more spacious opportunities for improvisation, color, and imaginative play.

And usually, his purpose is to sell you on himself. The conversation with a bullshitter is rarely about whatever subject matter it seems to be about on the surface. The conversation is really about his perceived identity and giving off the impression that he is of high value, rather than being about the supposed conversational topic.

HBD or Human Bio-Diversity is an interesting theoretical concept that in practice is about status jockeying and bullshitting via test scores. ... cult-prt-1
So with guilt, other people don’t matter as much because if you did something wrong, that’s all that matters in determining whether or not you feel bad about yourself, regardless of whether a bunch of other people also did the same wrong thing. With shame, though, since the issue is defectiveness, and defectiveness is judged by comparison to the norm, what others do is just as important if not more so than what you yourself do. This is why shame-based people spend incredible amounts of time trying to point out the defectiveness of others, because even if you feel powerless to change who and what you are via your actions, if you can prove others are as bad or worse, you can still ward off feelings of shame.

Race realism, or HBD, is a movement where a bunch of toxically shame-based white men created a cult of collective narcissism, a support group where members help other members deal with the low self-esteem created by their extreme shame issues. They support each other in their efforts to overcompensate against the shame. The main proof that they are shame-based comes from how much they fanatically believe in genetic determinism, the idea that one is either born great or born defective, and that hard work can do nothing to change that. They are constantly trying to argue that greatness comes from racial background, gender, country of origin, genetic makeup, and other things that one is born with and can’t change with actions.

They collaborate together to do all the narcissistic defenses necessary to maintain the overcompensation. They team up to mentally masturbate, project their insecurity and shame onto other groups, to blame other groups, to repress self-loathing, engage in extreme fantasy, acting out, displacement. As I explained earlier, showing other people are equally or more defective is very important to a shame-based person, so just as much if not more time is spent on HBD blogs tearing down and examining other racial groups as is spent on tooting their own horns. This is why they are endlessly fixated on what other groups do, because shame by its very nature is very other-oriented.

The cult fills their self-esteem needs by pursuing three main goals:

(1) Convince themselves and others that despite their unremarkable, mundane daily lives, they are in fact remarkable, superior people who are only out for fairness and justice.

(2) Convince themselves and others that the reason they have unremarkable, mundane daily lives despite being such remarkable, superior people is because of external circumstances, be it minorities, women, liberals, elites, government, and various conspiracy theories, and that in a just world, a meritocracy, they would be greater than they are and in charge of utopia.

(3) Convince themselves and others that there is a real-world revolution brewing and they are not only at the forefront of it but are gaining ground and on the road to winning it, and all their enemies are running scared. This is an illusion of action and activism designed to disguise both from themselves and others the fact that they’re actually a self-esteem support group for shame-filled white men and don’t actually have any real-world political goals or action plans beyond just being a support group. ... cult-prt-2
We’ve found that when people have a fixed mindset they often shy away from challenges. For them, deficiencies are permanent and so they are afraid to reveal them. People with fixed mindsets are also not as resilient in the face of setbacks because, again, they see setbacks as impugning their underlying abilities. Challenge-seeking and resilience are key factors in success. As a result, people with fixed mindsets often don’t achieve as much in the long run.

People with a growth mindset don’t necessarily believe everyone is the same or that anyone can be Einstein, but they understand that everyone can develop their abilities and that even Einstein wasn’t Einstein until he put in decades of dedicated labour. These people see a challenge as something that helps you learn, and a setback as something that ultimately helps develop your ability. For this reason, people with a growth mindset often accomplish more in the long term…

Each mindset creates a whole psychological world or a “meaning system” for people. It’s called a “meaning system” because mindsets change the meaning of what happens to us. First, as I’ve suggested, the mindsets change the meaning of challenges. In a fixed mindset, a challenge is threatening because it can reveal deficiencies. In a growth mindset, a challenge is an opportunity to get better at something. Next, mindsets change the meaning of effort. Those with a fixed mindset think if you have natural ability you shouldn’t need that much effort. Their belief is that things should come easily to people if they’re really smart. But those with a growth mindset understand that even geniuses have to work hard for their great discoveries and that effort, well-applied, will increase your abilities over time. Finally, mindsets change the meaning of failure. In a fixed mindset a failure is the worst thing that could happen. It discredits your ability, it’s something to run from, something to hide and even, we find in our research, to lie about. But in a growth mindset failure, while not welcome, is something you learn from.

What’s very interesting is that if you read a lot of the research on shame, and compare it to the findings of the line of research developed by Dweck and her peers, you see that a lot of it corresponds. The kids who identify with being smart, who make high IQ into their identity, are shame-based. They are focused on what their inherent identity is, and live life as if they’re trying to make a case proving that identity is true and always comparing themselves to others. The kids who identify with being hard workers, who focus on the quality of their action, are guilt-based and focus more on comparing their current selves to their past selves than to other people.

Another sign that the fixed mindset corresponds to shame is that when the entity theorists face an ego threat and overcompensate against it, they end up behaving very narcissistically, and as I described in the last installment, narcissism is an overcompensation against shame. A key element of narcissism is rigidity, and one can argue that rigid is another word for fixed. Fixed mindset entity theorists, like narcissists, become more interested in protecting their image of intelligence and avoiding any effort that carries with it a risk of failure that will ruin that superior image. They also become obsessed with comparing themselves to others and making them look worse, so that their own images can look better [emphasis added by me]:
Dweck’s research on overpraised kids strongly suggests that image maintenance becomes their primary concern—they are more competitive and more interested in tearing others down. A raft of very alarming studies illustrate this.

In one, students are given two puzzle tests. Between the first and the second, they are offered a choice between learning a new puzzle strategy for the second test or finding out how they did compared with other students on the first test: They have only enough time to do one or the other. Students praised for intelligence choose to find out their class rank, rather than use the time to prepare.

In another, students get a do-it-yourself report card and are told these forms will be mailed to students at another school—they’ll never meet these students and don’t know their names. Of the kids praised for their intelligence, 40 percent lie, inflating their scores. Of the kids praised for effort, few lie.

When students transition into junior high, some who’d done well in elementary school inevitably struggle in the larger and more demanding environment. Those who equated their earlier success with their innate ability surmise they’ve been dumb all along. Their grades never recover because the likely key to their recovery—increasing effort—they view as just further proof of their failure. In interviews many confess they would “seriously consider cheating.”

In one study, seventh graders told us how they would respond to an academic failure—a poor test grade in a new course. Those with the growth mindset, no big surprise, said they would study harder for the next test. But those with the fixed mindset said they would study less for the next test. If you don’t have the ability, why waste your time? And, they said, they would seriously consider cheating! If you don’t have the ability, they thought, you just have to look for another way.

What’s more, instead of trying to learn from and repair their failures, people with the fixed mindset may simply try to repair their self-esteem. For example, they may go looking for people who are even worse off than they are.

College students, after doing poorly on a test, were given a chance to look at tests of other students. Those in the growth mindset looked at the tests of people who had done far better than they had. As usual, they wanted to correct their deficiency. But students in the fixed mindset chose to look at the tests of people who had done really poorly. That was their way of feeling better about themselves.

Dweck, Carol (2006-02-28). Mindset: The New Psychology of Success (pp. 35-36). Random House, Inc. Kindle Edition.

I point out the shame and guilt research and the mindset research not just to psychoanalyze why HBDers are the way they are, although admittedly I find that important. I also point out these findings to challenge the idea that widespread acceptance of their ideas would create a better society. They think it would make them less angry and feel better, but it would just train a whole generation to develop the same narcissistic frustration, bitterness, self-sabotaging, and blaming tendencies. By encouraging shame-based, entity-theory fixed mindsets, they would just create more similarly discontented people. Furthermore, the shame and mindset research offers much plausible proof that their belief that effort is worthless in increasing intelligence and that innate ability is all-important is simply wrong. ... ult-part-4
Recently, there was a controversial Superbowl commercial by Volkswagen about Jamaicans. Many African-Americans found it to be racist. Also, many White Americans also found it to be incredibly racist, especially White liberals. However, what was really interesting is that Jamaicans themselves by and large didn’t find it to be racist at all. They mostly just laughed it off. News outlets kept reporting that Jamaicans found the ad hilarious. Athlone McGuinness, who is of Jamaican descent, also weighed in on how the ad wasn’t racist. All my Jamaican friends felt similarly.

But what it did show was how engrained stereotype obsession is in America, both among whites and among blacks. Americans, both white and black, spend a lot of mental energy either consciously avoiding any discussion of stereotypes, reveling in discussing stereotypes, or overcompensating against stereotypes. When it comes to being cognizant and hyperaware of racial stereotype concerns, African-Americans are surprisingly more similar to White Americans than they are to Caribbeans.

Victim Olympics - Who Can Score The Most Points?

Simply put, Superior Inferiority is when a shame-based person plays games with the aim of proving the superiority of their particular brand of inferiority. The person is out to prove that they have it worse than anyone else on earth who has it bad. It can also be called comparative victimology. In past posts, I’ve discussed how once a person realizes they’ve had subservient tendencies or bad boundaries, it can be seductively tempting to make those traits into their identity and define themselves by them.
You mean a sufferer can be narcissistic? Absolutely! For the sufferer, anguish is usually the only focus, the only awareness that makes them unique. Personal identity is constructed around being in pain, or being a victim, or being a survivor. Pain justifies a pervasive self-focus, with parasitic demands and exploitive relationships…

A sufferer often carries around a lot of emotional baggage, but letting go of the past is not an option. Without this history, the sufferer would lose his or her grounding for self-pity. In fact, the art of self-pity is perfected and provides an endless source of raw material.

Naturally, this pain is not ordinary pain. The narcissistic pain of the sufferer is laced with self-important features. “No one has suffered as I have suffered” is this narcissist’s only consolation. There may even be a transcendent dimension with religious meaning to this suffering: God sanctions the pain…

It is important to make a distinction between healthy and unhealthy pain. What needs to be faced may be painful, but this is the way of growth. In contrast, avoiding necessary pain leads to what’s been called “dirty pain.” The sufferer is a master at this kind of endless self-defeating misery. This is the narcissist’s defense against experiencing [healthy] legitimate pain [the kind that is more challenging but leads to more genuine growth – T.] while at the same time getting the attention they feel they deserve.

штрафбат вперед
User avatar
General Patton
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby FourthBase » Sat Apr 05, 2014 8:30 pm

Holyfuckingshitthisthread :O
“Joy is a current of energy in your body, like chlorophyll or sunlight,
that fills you up and makes you naturally want to do your best.” - Bill Russell
User avatar
Posts: 7057
Joined: Thu May 05, 2005 4:41 pm
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby Iamwhomiam » Thu Apr 24, 2014 10:50 am

Indeed it is, 4thB. My compliments, General Patton. I rarely venture into the dump, but there's much of interest and value here. I'll have more to say about a few of your more recent contributions later, after some considerable reading and contemplation. But for now I can say, thank you.
User avatar
Posts: 6543
Joined: Thu Sep 27, 2007 2:47 am
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby Wombaticus Rex » Thu Jan 29, 2015 10:23 pm


The Iron Law Of Intelligence

As luck would have it, I am myself a machine that thinks, so I will share the special insight this gives me with those of you who don't share my good fortune. To dispense with vestigial metaphysical objections, we know that machines that think like humans are possible, because they have been overrunning the landscape for millennia. If we now want human-like intelligences that are made, not begotten, then it will be extraordinarily useful to achieve an understanding of the human-like intelligences that already exist—that is, we need to characterize the evolved programs that constitute the computational architecture of the brain.

Not only has evolution packed the human architecture full of immensely powerful tricks, hacks, and heuristics, but studying this architecture has made us aware of an implacable, invisible barrier that has stalled progress toward true AI: the iron law of intelligence. Previously, when we considered (say) a parent and child, it seemed self-evident that intelligence was a unitary substance that beings had more or less of, and the more intelligent being knows everything that the less intelligent knows, and more besides. This delusion led researchers to think that the royal road to amplified intelligence was to just keep adding more and more of this clearly homogeneous (but hard to pin down) intelligence stuff—more neurons, transistors, neuromorphic chips, whatever. As Stalin (perhaps) said, Quantity has a quality all its own.

In contrast, the struggle to map really existing intelligence has painfully dislodged this compelling intuition from our minds. Instead, the iron law of intelligence states that a program that makes you intelligent about one thing makes you stupid about others. The bad news the iron law delivers is that there can be no master algorithm for general intelligence, just waiting to be discovered—or that intelligence will just appear, when transistor counts, neuromorphic chips, or networked Bayesian servers get sufficiently numerous. The good news is that it tells us how intelligence is actually engineered: with idiot savants. Intelligence grows by adding qualitatively different programs together to form an ever greater neural biodiversity.

Each program brings its own distinctive gift of insight about its own proprietary domain (spatial relations, emotional expressions, contagion, object mechanics, time series analysis). By bundling different idiot savants together in a semi-complementary fashion, the region of collective savantry expands, while the region of collective idiocy declines (but never disappears).
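The engineering problem described here, pooling the savantry while gating off the idiocy, can be caricatured in a few lines: each narrow program is only activated inside the domain where it is competent, and a simple gate routes each problem accordingly. This is a minimal illustrative sketch, not anything from the essay; the toy domains and all names are invented:

```python
# Two "idiot savants", each brilliant in one tiny domain and useless
# outside it, plus a gate that only fires an expert in its scope.
import math

def rect_expert(shape):
    """Competent only for rectangles."""
    if shape["kind"] != "rect":
        raise ValueError("outside my domain")
    return shape["w"] * shape["h"]

def circle_expert(shape):
    """Competent only for circles."""
    if shape["kind"] != "circle":
        raise ValueError("outside my domain")
    return math.pi * shape["r"] ** 2

# The architecture pairs each expert with the scope where activating
# it leaves you better off, not worse.
EXPERTS = [
    (lambda s: s["kind"] == "rect", rect_expert),
    (lambda s: s["kind"] == "circle", circle_expert),
]

def pooled(shape):
    """Route to the first in-scope expert; None = collective idiocy."""
    for in_scope, expert in EXPERTS:
        if in_scope(shape):
            return expert(shape)
    return None

print(pooled({"kind": "rect", "w": 3, "h": 4}))      # 12
print(pooled({"kind": "circle", "r": 1}))            # ~3.14159
print(pooled({"kind": "triangle", "b": 2, "h": 2}))  # None
```

Adding a third expert shrinks the `None` region without touching the existing ones, which is the "ever greater neural biodiversity" point in miniature.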

The universe is vast and full of illimitable layers of rich structure; brains (or computers) in comparison are infinitesimal. To reconcile this size difference, evolution sifted for hacks that were small enough to fit the brain, but that generated huge inferential payoffs—superefficient compression algorithms (inevitably lossy, because one key to effective compression is to throw nearly everything away).

Iron law approaches to artificial and biological intelligence reveal a different set of engineering problems. For example, the architecture needs to pool the savantry, not the idiocy; so for each idiot (and each combination of idiots) the architecture needs to identify the scope of problems for which activating the program (or combination) leaves you better off, not worse. Because different programs often have their own proprietary data structures, integrating information from different idiots requires constructing common formats, interfaces, and translation protocols.

Moreover, mutually consistent rules of program pre-emption are not always easy to engineer, as anyone knows who (like me) has been stupid enough to climb halfway up a Sierra cliff, only to experience the conflicting demands of the vision-induced terror of falling, and the need to make it to a safe destination.

Evolution cracked these hard problems, because neural programs were endlessly evaluated by natural selection as cybernetic systems—as the mathematician Kolmogorov put it, "systems which are capable of receiving, storing and processing information so as to use it for control." That natural intelligences emerged for the control of action is essential to understanding their nature, and their differences from artificial intelligences. That is, neural programs evolved for specific ends, in specific task environments; were evaluated as integrated bundles, and were incorporated to the extent they regulated behavior to produce descendants. (To exist, they did not have to evolve methods capable of solving the general class of all hypothetically possible computational problems—the alluring but impossible siren call that still shipwrecks AI labs.)

This means that evolution has only explored a tiny and special subset out of all possible programs; beyond beckons a limitless wealth of new idiot savants, waiting to be conceived of and built. These intelligences would operate on different principles, capable of capturing previously unperceived relationships in the world. (There is no limit to how strange their thinking could become).

We are living in a pivotal era, at the beginning of an expanding wave front of deliberately engineered intelligences—should we put effort into growing the repertoire of specialized intelligences, and networking them into functioning, mutually intelligible collectives. It will be exhilarating to do with nonhuman idiot savant collectives what we are doing here now with our human colleagues—chewing over intellectual problems using minds equipped with interwoven threads of evolved genius and blindness.

What will AIs want? Are they dangerous? Animals like us are motivated intelligences capable of taking action (MICTAs). Fortunately, AIs are currently not MICTAs. At most, they are only trivially motivated; their motivations are not linked to a comprehensive world picture; and they are only capable of taking a constrained set of actions (running refineries, turning the furnace off and on, shunting packets, futilely attempting to find wifi). Because we evolved with certain adaptive problems, our imaginations project primate dominance dramas onto AIs, dramas that are alien to their nature.

We could transform them from Buddhas—brilliant teachers passively contemplating without desire, free from suffering—into MICTAs, seething with desire, and able to act. That would be insane—we are already bowed under the conflicting demands of people. The foreseeable danger comes not from AIs but from those humans in which predatory programs for dominance have been triggered, and who are deploying ever-growing arsenals of technological (including computational) tools for winning conflicts by inflicting destruction.


Machines Mostly Steal Thoughts But Open A New Era Of Exploration

Man-made machines increasingly do things we previously considered thinking, but don't do anymore because now machines do them. I stole this recent thought more or less accurately from Danny Hillis, father of the Connection Machine and the Knowledge Graph. Stealing thoughts is a common activity in the thought processes of both humans and machines. Indeed, when we humans are thinking, much of the content of our thoughts comes from past experience or the documented experience of others. Very rarely do we come up with something completely new. Our machines are not much different. What is called cognitive computing is in essence nothing else but a very sophisticated thought stealing mechanism, driven by a vast amount of knowledge and a complicated set of algorithmic processes. Such thought stealing processes, in both human(istic) thought and cognitive computing, are impressive, as they are capable of stealing not only existing thoughts but also potential thoughts that are reasonable or likely, based on a given corpus of knowledge.

Today, thought stealing machines can produce scholarly texts that are indistinguishable from "post-modern thought," computer science papers that get accepted in conferences, or compositions that experts cannot disambiguate from originals by classical composers. As in weather forecasting, machines are now capable of producing many different cognitive representations based on expectations derived from documents about the past or similar situations. Renaissance antiquarians would be delighted, as these machines are a triumph of the very methods that gave rise to modern archaeology and many other branches of science and research. But how impressed should we really be?
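A minimal version of such a "thought stealing" machine is a bigram Markov model: every fragment it emits is lifted from its corpus, yet it can emit sequences the corpus never contained. This is an illustrative sketch with a placeholder corpus, not a claim about how any system mentioned above actually works:

```python
# Bigram Markov text generator: "steals" word-to-word transitions
# from a corpus and recombines them into new, plausible sequences.
import random
from collections import defaultdict

corpus = ("the machine steals thoughts and the machine recombines "
          "thoughts into new thoughts the machine produces text").split()

# Map each word to every word that follows it somewhere in the corpus.
successors = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    successors[a].append(b)

def generate(seed, length, rng=random.Random(0)):
    """Walk the bigram graph from `seed` for up to `length` words."""
    words = [seed]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every adjacent word pair in the output occurs somewhere in the corpus, which is exactly the sense in which nothing here is "completely new."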

Our machines get more and more sophisticated, and so do their results. But, as we build better and better machines, we also learn more and more about nature. In fact, natural cognition is likely much more complex and detailed than our current incarnations of artificial intelligence or cognitive computing. For example, how sophisticated do we have to imagine natural cognition, when quantum coherence at room temperature can help common birds in our garden to sense the magnetic field? How complex do we have to imagine embodied cognition in common octopi, when it is possible to build Turing machines that are made exclusively out of artificial muscles? How should we answer these questions, when we are still very far from recording in full detail what is going on in our brains? My guess is, in 200 years our current thinking machines will look as primitive as the original mechanical Turk.

However sophisticated they may become, compared to the resolution and efficiency of natural cognition, our machines are still primitive. Similar to proto-biotic metabolism, our machines are below a critical threshold to real life. But our machines are powerful enough that we can enter a new era of exploration. Our machines allow us to produce many more thoughts than ever produced before, with innovation becoming an exercise of finding the right thought in the set of all possible thoughts. As much as having our own ideas, ingenuity will lie in the proper exploration of such ready-made sets of thought. Measuring the cognitive space of all possible thoughts will be as awe-inspiring as the exploration of the universe by astronomy. Maybe Mahler's potential 60th is as awesome as his 6th.

c/f Aphex Twin via: ... phex-twin/

...I’ve actually recently hired a Chinese programmer to make music software for me. It’s taking the concept of mutation into music software. You give the program some sounds you made and then it gives you six variations of it and then you choose the one you like most and then it makes another six and it kind of keeps trying to choose the variations by itself. It’s a bit like that, but more advanced, but basically it starts with a sound, analyzes it, then does different versions of variations. It randomizes, it compares all of them to the original and then it picks the best one. It sounds totally awesome, but it needs to be tweaked a little bit. I will continue with this. I have a whole book full of ideas for software and instruments.”
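The loop described in that interview is essentially a (1+λ) evolutionary strategy: make six variations, score them against the original, keep the best, repeat. Here is a loose sketch with a list of floats standing in for a sound; the scoring rule (stay close to the original) is an assumption on my part, since the interview doesn't say how "best" is judged:

```python
# Mutate-and-select loop: each generation produces six variations of
# the current sound, scores each against the original, keeps the best.
import random

def mutate(sound, rng, strength=0.1):
    """Return a variation: each sample nudged by a little noise."""
    return [s + rng.uniform(-strength, strength) for s in sound]

def score(candidate, original):
    """Assumed fitness: negative squared distance to the original
    (higher is better, 0 would be identical)."""
    return -sum((c - o) ** 2 for c, o in zip(candidate, original))

def evolve(original, generations=20, variations=6, seed=0):
    rng = random.Random(seed)
    current = mutate(original, rng)  # start from one variation
    for _ in range(generations):
        brood = [mutate(current, rng) for _ in range(variations)]
        current = max(brood, key=lambda c: score(c, original))
    return current

original = [0.0, 0.5, -0.5, 0.25]
result = evolve(original)
print(score(result, original))  # near 0: selection hovers by the source
```

With closeness-to-the-original as fitness the loop just hovers near the source sound; any other scoring function (novelty, spectral features, a human clicking "I like this one") plugs into `score` the same way.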
User avatar
Wombaticus Rex
Posts: 10780
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby Grizzly » Fri Jan 30, 2015 7:42 am

:blankstare Super heady stuff, anybody wanna do an ELI5? Or a TL;DR? I know it's important, but it fries my little cranium ...
“The more we do to you, the less you seem to believe we are doing it.”

― Joseph mengele

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby General Patton » Fri Oct 02, 2015 2:01 am

Grizzly » Fri Jan 30, 2015 6:42 am wrote: :blankstare Super heady stuff, anybody wanna do an ELI5? Or a TL;DR? I know it's important, but it fries my little cranium ...

Structure can help when you know that you know something but can't figure out exactly what it is that you know.
штрафбат вперед [penal battalion, forward]

Re: Structured Thinking : Analysis, Exploration, Exploitatio

Postby Wombaticus Rex » Sun Oct 11, 2015 10:19 am

Some wild details at the end of this PDF: ... 012int.pdf

But more importantly, their 2013-era approximation of a general consensus meta-synthesis fuckball:

We review new findings and new theoretical developments in the field of intelligence. New findings include the following:

(a) Heritability of IQ varies significantly by social class.
(b) Almost no genetic polymorphisms have been discovered that are consistently associated with variation in IQ in the normal range.
(c) Much has been learned about the biological underpinnings of intelligence.
(d) “Crystallized” and “fluid” IQ are quite different aspects of intelligence at both the behavioral and biological levels.
(e) The importance of the environment for IQ is established by the 12-point to 18-point increase in IQ when children are adopted from working-class to middle-class homes.
(f) Even when improvements in IQ produced by the most effective early childhood interventions fail to persist, there can be very marked effects on academic achievement and life outcomes.
(g) In most developed countries studied, gains on IQ tests have continued, and they are beginning in the developing world.
(h) Sex differences in aspects of intelligence are due partly to identifiable biological factors and partly to socialization factors.
(i) The IQ gap between Blacks and Whites has been reduced by 0.33 SD in recent years.

We report theorizing concerning
(a) the relationship between working memory and intelligence,
(b) the apparent contradiction between strong heritability effects on IQ and strong secular effects on IQ,
(c) whether a general intelligence factor could arise from initially largely independent cognitive skills,
(d) the relation between self-regulation and cognitive skills, and
(e) the effects of stress on intelligence.
