
Recorded Future, Data Mining, Intelligence Agencies

PostPosted: Fri Aug 26, 2011 3:56 pm
by Wombaticus Rex

Specifically, Recorded Future is a formerly private tech company based out of Cambridge, MA that specializes in "Temporal Analytics." I did some searching and was surprised I couldn't find any specific thread about the recent(ish) news that both Google and the CIA's tech investment firm In-Q-Tel had bought, heavily, into their project...here's a quick background primer first:

http://www.wired.com/dangerroom/2010/07/exclusive-google-cia

Exclusive: Google, CIA Invest in ‘Future’ of Web Monitoring

The investment arms of the CIA and Google are both backing a company that monitors the web in real time — and says it uses that information to predict the future.

The company is called Recorded Future, and it scours tens of thousands of websites, blogs and Twitter accounts to find the relationships between people, organizations, actions and incidents — both present and still-to-come. In a white paper, the company says its temporal analytics engine “goes beyond search” by “looking at the ‘invisible links’ between documents that talk about the same, or related, entities and events.”

The idea is to figure out for each incident who was involved, where it happened and when it might go down. Recorded Future then plots that chatter, showing online “momentum” for any given event.

“The cool thing is, you can actually predict the curve, in many cases,” says company CEO Christopher Ahlberg, a former Swedish Army Ranger with a PhD in computer science.

Which naturally makes the 16-person Cambridge, Massachusetts, firm attractive to Google Ventures, the search giant’s investment division, and to In-Q-Tel, which handles similar duties for the CIA and the wider intelligence community.
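
For flavor, that "momentum" curve boils down to a time series of mention counts extrapolated forward. A toy sketch of the general technique, in Python -- my own illustration, not anything from Recorded Future's actual engine:

    # Toy "momentum" curve: count daily mentions of an entity, fit a
    # linear trend, and extrapolate a few days out.
    from collections import Counter
    from datetime import date

    import numpy as np

    # (date, entity) pairs as they might come out of a web crawl
    mentions = [
        (date(2011, 8, 20), "Recorded Future"),
        (date(2011, 8, 21), "Recorded Future"),
        (date(2011, 8, 21), "Recorded Future"),
        (date(2011, 8, 22), "Recorded Future"),
        (date(2011, 8, 23), "Recorded Future"),
    ]

    daily = Counter(day for day, entity in mentions)
    days = sorted(daily)
    xs = np.array([(d - days[0]).days for d in days], dtype=float)
    ys = np.array([daily[d] for d in days], dtype=float)

    # "Predict the curve": linear fit, projected three days ahead
    slope, intercept = np.polyfit(xs, ys, 1)
    print(f"projected daily mentions: {intercept + slope * (xs[-1] + 3):.1f}")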


Needless to say I've had an abiding interest in these folks ever since and my handy Alerts notified me of this white paper from the company on "Temporal Analytics." I found it to be an engaging read with surprising depth of reference -- some quality brainfood here!

READ: http://blog.recordedfuture.com/2010/03/13/recorded-future-%E2%80%93-a-white-paper-on-temporal-analytics/

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 4:15 pm
by Bruce Dazzling
I'll be shocked, SHOCKED if this CIA/Google project is used for anything other than the general good.

I started a thread on this Kevin Slavin presentation yesterday, but it kind of died on the vine.

It's relevant here, though.

Bruce Dazzling wrote:
Kevin Slavin argues that we're living in a world designed for -- and increasingly controlled by -- algorithms. In this riveting talk from TEDGlobal, he shows how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture. And he warns that we are writing code we can't understand, with implications we can't control.




Kevin Slavin is the Chairman and co-Founder of Area/Code. Founded in 2005, Area/Code creates cross-media games and entertainment for clients including Nokia, CBS, Disney Imagineering, MTV, Discovery Networks, A&E Networks, Nike, Puma, EA, the UK’s Department for Transport, and Busch Entertainment.

Area/Code builds on the landscape of pervasive technologies and overlapping media to create new kinds of entertainment. They have built mobile games with invisible characters that move through real-world spaces, online games synchronized to live television broadcasts, and videogames in which virtual sharks are controlled by real-world sharks with GPS receivers stapled to their fins. Their Facebook game “Parking Wars” served over 1 billion pages in 2008.

Before founding Area/Code, Slavin spent over 10 years in ad agencies including DDB, TBWA\Chiat\Day and SS+K, focused primarily on technology, networks, and community. His work has been recognized through many industry awards and press.

Area/Code’s work has received awards from the Clios, the One Club, Creativity, and many others, and the co-founders were recently named to the Creativity 50 and the Gamasutra 20. Slavin has spoken at the BBC, Ad Age, 5D, MoMA, the Van Alen Institute, the Guardian, DLD, the Cooper Union, the Storefront for Art and Architecture, and NBC, and together with Adam Greenfield he teaches Urban Computing at NYU’s Interactive Telecommunications Program. His work has been exhibited internationally, including at the Design Museum of London and the Frankfurt Museum für Moderne Kunst.

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 4:32 pm
by Wombaticus Rex
^^Definitely, thank you.

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 4:48 pm
by hanshan
...


Wombaticus Rex wrote:Specifically, Recorded Future is a formerly private tech company based out of Cambridge, MA that specializes in "Temporal Analytics." I did some searching and was surprised I couldn't find any specific thread about the recent(ish) news that both Google and the CIA's tech investment firm In-Q-Tel had bought, heavily, into their project...here's a quick background primer first:

http://www.wired.com/dangerroom/2010/07/exclusive-google-cia



Exclusive: Google, CIA Invest in ‘Future’ of Web Monitoring

The investment arms of the CIA and Google are both backing a company that monitors the web in real time — and says it uses that information to predict the future.

The company is called Recorded Future, and it scours tens of thousands of websites, blogs and Twitter accounts to find the relationships between people, organizations, actions and incidents — both present and still-to-come. In a white paper, the company says its temporal analytics engine “goes beyond search” by “looking at the ‘invisible links’ between documents that talk about the same, or related, entities and events.”

The idea is to figure out for each incident who was involved, where it happened and when it might go down. Recorded Future then plots that chatter, showing online “momentum” for any given event.

“The cool thing is, you can actually predict the curve, in many cases,” says company CEO Christopher Ahlberg, a former Swedish Army Ranger with a PhD in computer science.

Which naturally makes the 16-person Cambridge, Massachusetts, firm attractive to Google Ventures, the search giant’s investment division, and to In-Q-Tel, which handles similar duties for the CIA and the wider intelligence community.


Apparently this capability has existed for some time, although not in the public domain (i.e., well before 9/11).


Needless to say I've had an abiding interest in these folks ever since and my handy Alerts notified me of this white paper from the company on "Temporal Analytics." I found it to be an engaging read with surprising depth of reference -- some quality brainfood here!



READ: http://blog.recordedfuture.com/2010/03/13/recorded-future-%E2%80%93-a-white-paper-on-temporal-analytics/

very cool, tx.

& tx BD, as missed Slavin first time 'round


...

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 6:05 pm
by DrVolin

Or the analog version:

http://www.soundtrackcollector.com/imag ... Condor.jpg

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 6:11 pm
by hanshan
...

DrVolin wrote:Or the analog version:

http://www.soundtrackcollector.com/imag ... Condor.jpg



get a 403 Forbidden on that link (?)


...

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 6:51 pm
by Wombaticus Rex
Source: http://www.arnoldit.com/search-wizards- ... uture.html

Recorded Future, a privately held firm, has distinguished itself in several ways. First, the company received financial support from In-Q-Tel, the investment arm of the US intelligence community, and from Google, a company known for its voracious interest in next-generation technology. Second, the company has ignited the blogosphere with the fact-filled and informative posts on the firm's Web log; topics have ranged from scanning the horizon of automobile technology to eye-popping visualizations of "big data". Third, the company has been the subject of considerable discussion by analysts, competitors, and legal experts.

On April 2, 2011, I spoke with Christopher Ahlberg, the founder of Recorded Future. The company is competing in what, for lack of a better term, I call the "predictive analytics" market sector. Part business intelligence and part content processing, Recorded Future processes a wide range of inputs, analyzes entities and other identified elements, and uses the content and metadata to fuel numerical processes. The object of the computationally intensive operations is to provide insights about likely outcomes. The name of the company, Recorded Future, evokes both traditional content processing and the next-generation techniques of predictive analytics.

Like Palantir, a data analytics company which made headlines after landing $90 million in venture funding, Recorded Future uses easy-to-grasp, high-quality graphical outputs. The system can output tables and hot-linked results lists. Recorded Future recognizes that users want to "see" information in context as well as have the ability to dig into the underlying data or explore a particular set of outputs through time. But what sets Recorded Future apart is the interest from both In-Q-Tel and Google, which makes it like a company that nails a million-dollar contract and wins the Fields Medal on the same day.

I was able to talk with Mr. Ahlberg at his home base of Boston, Massachusetts. The development group of Recorded Future is in Sweden. The full text of my interview with Mr. Ahlberg appears below.

Thanks for taking the time to talk with me. What’s the driver for Recorded Future?

The founders behind Recorded Future were part of the same team that built Spotfire.

That’s a TIBCO company now, right?

Yes. It sounds as if you are familiar with Spotfire, which is a tool for visualizing and analyzing large sets of structured data. We ended up selling that company after building it to US$60 million, which we thought was a healthy business. After a short break, my colleagues and I started to think about what to do next.

What really stood out for us was the incredible richness of the Internet as a data source. Google and others have certainly shown how powerful indexes can show you the path to interesting documents. But what if we could make "the Internet" available for analysis?

So we set out to organize unstructured information at very large scale by events and time.

Can you give me an example?

Of course. A query might return a link to a document that says something like "Hu Jintao will tomorrow land in Paris for talks with Sarkozy" or "Apple will next week hold a product launch event in San Francisco". We wanted to take this information and make insights available through stunning user experiences and application programming interfaces. Our idea was that an API would allow others to tap into the richness and potential of Internet content in a new way.
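
To make that concrete, here is a deliberately crude sketch of the kind of extraction Ahlberg is describing -- pulling an actor, an action, a place, and a resolved date out of such a sentence. The regex and field names are invented for illustration; the real pipeline is certainly far more sophisticated:

    # Toy event extraction: turn "X will <time> <verb> in Y" sentences
    # into structured temporal events. The regex and field names are
    # invented for illustration; a real system would use proper NLP.
    import re
    from datetime import date, timedelta

    PATTERN = re.compile(
        r"(?P<actor>[A-Z][\w ]+?) will (?P<when>tomorrow|next week) "
        r"(?P<action>[\w ]+?) in (?P<place>[A-Z]\w+)"
    )

    OFFSETS = {"tomorrow": timedelta(days=1), "next week": timedelta(weeks=1)}

    def extract(sentence, published):
        m = PATTERN.search(sentence)
        if m is None:
            return None
        return {
            "actor": m.group("actor"),
            "action": m.group("action"),
            "place": m.group("place"),
            # resolve the relative time expression against the
            # publication date of the document
            "event_date": published + OFFSETS[m.group("when")],
        }

    print(extract("Hu Jintao will tomorrow land in Paris for talks",
                  published=date(2011, 8, 26)))

That prints {'actor': 'Hu Jintao', 'action': 'land', 'place': 'Paris', 'event_date': datetime.date(2011, 8, 27)} -- the point being that the sentence now carries a machine-readable future date.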

Hakia has made quite an impact with its SENSEnews service. Is your system applicable to the financial community’s interests as well?

Absolutely. Let me provide you with some color on what our system can do. In quantitative analysis (for example in finance) we can prove the predictive power of our data (stock returns, volatility). The method applies to other areas as well. For instance, we think about this as providing data through user experiences for end users to do analysis. The key differentiator for our system is that the query and results can be about the past or the future.

Applications range from law enforcement to financial analysis to health and medical challenges.

When did you become interested in text and content processing?

That’s a good question. I've always had a keen interest in data analysis and visualization. Even back in 1993 as part of my PhD I worked on what was called the FilmFinder which took large amounts of textually oriented data (what's now IMDB) and allowed you to explore that data in a visual manner.

Later, when we were working on Spotfire in finance and government, we had lots of interest in visualizing and analyzing textual data. Some of this work required our working with outputs that were generated by a range of text analysis tools.

It struck me that if we could turn textual information into temporal events (through clever linguistics) we could organize data for analysis. I saw that if we actually built the whole stack as a service for people, we could do this in a really attractive fashion and solve some significant and difficult information problems for people.

Recorded Future has been investing in search, content processing, and text analysis for a number of years. What's the R&D strategy going forward?

We started thinking about how to do this in early 2009 and have had engineering staff (by now a pretty good group) on this for about 18 months. We've deployed our product to some of the most sophisticated analytic organizations in both finance and government, and we've gained a whole range of users around the world. The goals we've set out have been quite ambitious: to allow analysts to work with the "Internet as a source" and to succeed in actual financial predictions.

We know our technology delivers results, so we knew we were on a very promising path that few had traveled successfully. Thus, the support of In-Q-Tel and Google has been extremely positive. It is good to have highly regarded organizations validate one's methods. Each organization gives us access to quite sophisticated people in the domains in which we work.

For the technical work, we do not have any magic. There is just hard work and our desire to deliver results to our users and customers. Some people think that In-Q-Tel and Google have magic, but there is no magic, just more work and lots of problem solving. We benefit from both organizations' interest in, and constructive criticism of, our system.

Many vendors argue that mash-ups and data fusion are "the way information retrieval will work going forward." I am referring to structured and unstructured information. How does Recorded Future perceive this blend of search and user-accessible outputs?

Yes, I think it's very true that we live in a time when data can and should come together better than ever before. Now, that doesn't mean that that's easy.

Architecturally we try to address this by building our user interfaces on the very same API that we provide to customers and partners so that we force ourselves to have a very standard, transparent way of accessing our data. We document this publicly.

Our customers are very interested in mashing up data from different sources. I think you call this “data fusion” in your Inteltrax.com blog. We want to mash up data and applications, not just data.

Would you give me an example?

Of course. This might be integrating our timeline visualizations with geospatial applications. Alternatively, we integrate our data on corporations, through identifiers such as stock tickers, with external equity pricing/returns data. We try to prepare for these scenarios. We've published open examples, for instance http://www.predictivesignals.com/2010/1 ... ytics.html, with our data loaded into Google Spreadsheets, R, etc. The Recorded Future data will gain in value when our system becomes more pervasive.
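
As a toy illustration of that ticker-based fusion (column names and numbers are invented here), the join itself can be as simple as a merge on ticker and date:

    # Toy "data fusion": join extracted events to equity returns on
    # (ticker, date). Column names and numbers are invented here.
    import pandas as pd

    events = pd.DataFrame({
        "ticker": ["AAPL", "AAPL", "XOM"],
        "date": pd.to_datetime(["2011-08-24", "2011-08-25", "2011-08-25"]),
        "event": ["product launch", "insider trade", "refinery outage"],
    })

    returns = pd.DataFrame({
        "ticker": ["AAPL", "AAPL", "XOM"],
        "date": pd.to_datetime(["2011-08-24", "2011-08-25", "2011-08-25"]),
        "ret": [0.012, -0.051, -0.017],
    })

    # One row per (ticker, date) carrying both the event and the return
    fused = events.merge(returns, on=["ticker", "date"], how="inner")
    print(fused)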

Without divulging your firm's methods, will you characterize a typical use case for your firm's content processing, tagging, and search and retrieval capabilities?

We have two primary use cases. One is end users doing analytic research. This use case can be as simple as "I'm looking to buy a large block of Apple shares. Find me upcoming events, scheduled and unscheduled/speculative, for Apple over the next 12 months so that I can weigh these external catalysts into my analysis".

The other major use case is integrating our data into quantitative analysis. For instance, an analyst may have an equity or commodity pricing model and would like to weigh in events and time as a factor.
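
At its simplest, that could mean treating daily event counts as one more regressor in the model. A hedged sketch with made-up numbers:

    # Toy version of "weighing in events as a factor": regress next-day
    # returns on a daily event-count signal. Numbers are made up.
    import numpy as np

    event_counts = np.array([0, 2, 1, 5, 0, 3], dtype=float)
    next_day_ret = np.array([0.001, -0.012, -0.004, -0.031, 0.002, -0.015])

    # Least-squares fit of ret ~ beta * events + alpha
    X = np.column_stack([event_counts, np.ones_like(event_counts)])
    (beta, alpha), *_ = np.linalg.lstsq(X, next_day_ret, rcond=None)
    print(f"beta={beta:.4f}, alpha={alpha:.4f}")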

What are the benefits to a commercial organization or a government agency when working with your firm?

We realize that what we're doing is something totally new. As you know, there are plenty of tools for information/entity extraction, etc. But the way we focus on higher-end concepts such as events and time is, we'd like to think, fairly unique. We've packaged all of this up into a hosted service that users can access without even having to think about "entity extraction" or the like.

Building such a service certainly takes some fine tuning. We try to be very humble about that. And we have been most fortunate to build a solid group of customers who're very successful in using our tools. Quite encouraging!

How does an information retrieval engagement move through its life cycle?

Because we deliver a hosted service, we can listen to the customer and respond to each customer's requirements. As a cloud or hosted service, we essentially manage the whole cycle. We add new sources on a continuous basis, new concepts that are extracted, new UI improvements, new API calls, etc.

Our commitment to innovation and system enhancements is a big part of the ethos of the company.

One challenge to those involved with squeezing useful elements from large volumes of content is the volume of content AND the rate of change in existing content objects. What does your firm provide to customers to help them deal with the problems of “big data”?

This is a great question. Of course, there is delay in information. When a barge with cobalt gets stuck somewhere in Congo and the information eventually hits a trading floor in Chicago, that information doesn't travel in milliseconds.

What we do is tag information very, very carefully. We add metatags that make explicit when we located an item of data, when it was published, when we analyzed it, and what actual time point (past, present, future) the datum refers to. The time precision is quite important; it makes it possible for end users and modelers to deal with this important attribute.
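
As a data structure, that tagging scheme might look something like the following. The field names are my own invention; only the idea of carrying several distinct timestamps per item comes from the interview:

    # One extracted item with its temporal metadata made explicit.
    # Field names are invented; the point is that harvest time,
    # publication time, analysis time, and the time the text refers
    # to are four different things.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TemporalItem:
        text: str
        found_at: datetime      # when the crawler located the item
        published_at: datetime  # when the source published it
        analyzed_at: datetime   # when the pipeline processed it
        refers_to: datetime     # the time point the text is about

    item = TemporalItem(
        text="Apple will next week hold a product launch event",
        found_at=datetime(2011, 8, 26, 14, 0),
        published_at=datetime(2011, 8, 26, 9, 30),
        analyzed_at=datetime(2011, 8, 26, 14, 5),
        refers_to=datetime(2011, 9, 2, 0, 0),   # "next week", resolved
    )
    print(item.refers_to - item.published_at)   # lead time to the event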

At this stage in our technology's capabilities, we're not trying to claim that we can beat someone like Reuters or Bloomberg at delivering a piece of news the fastest. But if you're interested in monitoring, for example, the coincidence of an insider trade with a product recall, we can probably beat most at that.

Another challenge, particularly in professional intelligence operations, is moving data from point A to point B; that is, information enters a system but it must be made available to an individual who needs that information or at least must know about the information. What does your firm offer licensees to address this issue of content "push", report generation, and personalization within a work flow?

Okay, good points. We've built and provided integrations to environments such as Google Spreadsheets, R, Spotfire, etc. We have loads and loads of ideas for how our data can hit productivity environments. I can't reveal the details of what is coming from Recorded Future, but I can ask you to take a close look at the enhancements to our cloud service that will become available in the near future.

Is that a “recorded future?”

Yes, the enhancements I referenced have a 0.999999 probability of becoming available. Our customers are quite vocal in their needs, and we are responding as you and I are talking today.

There has been a surge in interest in putting "everything" in a repository and then manipulating the indexes to the information in the repository. On the surface, this seems to be gaining traction because network resident information can "disappear" or become unavailable. What's your view of the repository versus non repository approach to content processing? What are the "hooks" between content processed by Recorded Future and a more traditional type of analytics system?

To be honest, at this stage we have really not worked on this at all. We are focused on Internet content and will be tackling other types of content in the near future.

No problem. I appreciate your focus. Let me ask about visualization. Visualization has been a great addition to briefings. On the other hand, visualization and other graphic eye candy can be a problem for those in stressful operational situations. What's your firm's approach to presenting "outputs" that end users can easily ingest?

I think the challenge is not so much whether visualization is good or not. I know that you and I can discuss at length ways to help an end user get the fact or insight needed. I think the key is that in some use cases users would like to explore/visualize/analyze very actively, whereas in other use cases they would like to just be alerted to interesting patterns or see a report that someone else has done. For example, in Recorded Future anyone can share an analysis on Twitter or Facebook. That's the level of ease we want to provide for sharing.

Thank you for providing such interesting insights into Recorded Future's capabilities. However, I am on the fence about merging retrieval into other applications. What's your take on the "new" method which some people describe as "search-enabled applications"?

Our plan is not really to compete with someone like Autonomy, Endeca, or Exalead. We want to build a great service which indexes "the Internet" and makes it available for analysis. We believe that will be very valuable for people in finance, government, marketing, sales, etc. Think about it perhaps as the next generation of business intelligence.

There seems to be a popular perception that the world will be doing computing via iPad devices and mobile phones. My concern is that serious computing infrastructures are needed and that users are "cut off" from access to more robust systems. How does Recorded Future see the computing world over the next 12 to 18 months?

You are right. There certainly is a massive thrust of data and applications moving into the cloud, and we'd like to ride that wave. Business intelligence and search have lagged behind there, but it's happening now. I think you're right to be concerned, though, that this might in fact cut off users from internal data and applications. We have some interesting strategies in mind to address this.

Put on your wizard hat. What are the three most significant technologies that you see affecting your search business?

The three trends affecting what we're doing are, first, the shift from on-premises systems to the cloud. Data and tools are moving into hosted data centers and the pace is, in my opinion, accelerating.

Second, I am interested in the rapid uptake of scalable database technologies. These systems allow us to index very large amounts of data. Recorded Future’s innovations thrive on large volumes of data.

Third, I think HTML5 is important. That technology allows us to build very compelling, Web-based user experiences.

Where does a reader get more information about Recorded Future?

I think I would suggest that anyone wanting more information visit http://www.recordedfuture.com. I also want to invite people to read our blogs such as http://blog.recordedfuture.com/ and http://www.analysisintelligence.com/. You can also email us at sales@recordedfuture.com.

ArnoldIT Comment

Recorded Future is going to be a disruptive company. The firm has a solid base of customers among governmental entities in the US and in Europe. With the support of Google, Recorded Future is going to find that interest among Google’s enterprise customers is a certainty. Like other companies offering next-generation technology, Recorded Future will have to continue to innovate and ward off competitive thrusts from giants like IBM, Oracle, and SAP. In addition, established players in analytics like i2 Ltd and the upstart Palantir will challenge Recorded Future in certain markets. Nevertheless, our view is that the future of Recorded Future is bright. This is a company to watch.

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 7:15 pm
by Searcher08
In addition, established players in analytics like i2 Ltd and the upstart Palantir will challenge Recorded Future in certain markets. Nevertheless, our view is that the future of Recorded Future is bright. This is a company to watch.

Years ago, when it looked like 9/11 Truth was going somewhere, I asked an online mate who was ex Special Branch in the UK what was the best software to do really industrial strength sleuthing - without hesitation he said i2.

http://www.i2group.com/us/products/anal ... s-notebook

RI would probably solve the JFK case over a long weekend with their software :lol2:

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 7:35 pm
by Wombaticus Rex
^^Actually, I am continually amazed how few data points exist online. The fact that Google Books exists in a wholly castrated state bothers me a lot -- opening those floodgates would radically increase the amount of actual information on the internet. There is a vast gulf separating information from mere content, which the internet has in kaleidoscopic abundance.

It would also be a good step to force all the academic archives currently behind paywalls onto the public net ASAP.

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 7:42 pm
by eyeno
Wombaticus Rex wrote:^^Actually, I am continually amazed how few data points exist online. The fact that Google Books exists in a wholly castrated state bothers me a lot -- opening those floodgates would radically increase the amount of actual information on the internet. There is a vast gulf separating information from mere content, which the internet has in kaleidoscopic abundance.

It would also be a good step to force all the academic archives currently behind paywalls onto the public net ASAP.



I would like to issue a wish. With your data mining prowess, I would appreciate it if you would start a thread named something similar to:

"all you ever wanted to know about online data mining for the ignorant"

or maybe

"data mining for dummies"

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 7:55 pm
by Harvey
Searcher08 wrote:In addition, established players in analytics like i2 Ltd and the upstart Palantir will challenge Recorded Future in certain markets. Nevertheless, our view is that the future of Recorded Future is bright. This is a company to watch.

Years ago, when it looked like 9/11 Truth was going somewhere, I asked an online mate who was ex Special Branch in the UK what was the best software to do really industrial strength sleuthing - without hesitation he said i2.

http://www.i2group.com/us/products/anal ... s-notebook

RI would probably solve the JFK case over a long weekend with their software :lol2:


I guess the question becomes, on many levels: what did they find? Because it's absurd to think that they haven't already.

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Fri Aug 26, 2011 7:59 pm
by Wombaticus Rex
Not just faking humility to avoid work: I am really not the expert on this stuff. I actually have very little experience doing research for clients. I wish I could get people to pay me for that, but my work is limited to copywriting, coding and SEO horseshit. I've done competitive intel gigs but very little of that actually involves the internet. Most of the really powerful tools, I have never even touched because I've never worked for a client who had access. I am certainly not in any position to pay those fees myself.

I recommend this: http://www.searchlores.org/indexo.htm which is written by an actual expert.

Especially this: http://www.searchlores.org/deepweb_searching.htm

Single best tip: add this to any serious search you do: -".com"
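
For example (a query I'm inventing on the spot):

    temporal analytics white paper -".com"

The -".com" term excludes pages containing the string ".com", which in practice knocks out most commercial domains and skews results toward .edu, .org and .gov sources.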

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Sat Aug 27, 2011 12:26 pm
by Wombaticus Rex
A fascinating and pivotal read that I re-discovered this morning digging around the topics of this thread:

http://www.yorku.ca/aviseu/pdf%20files/ ... zation.pdf

1. Electricity, the third language technology

Indeed, the world is one single electronic grid, whether we consider the power grid, the telephone, the audiovisual media or the internet. If the outage brings to mind the practical importance of electricity, it should also cause us to reflect on its psychological, cognitive and social roles. Electricity, the only medium without a message (McLuhan 1964), is the third major language technology after speech and writing. It is also the basis of knowledge societies. Thinking itself is partially dependent upon an electrical activity. We have electricity in our bodies to drive our central nervous system. So there is a certain kind of continuity between our gestures and our tools, something that can be observed in children playing videogames, or in a person typing at a computer, for example.


Obviously, a broadcast from the deep end of the pool, but there's a lot of relevant brainfood there for our Unspoken Thesis.

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Sat Aug 27, 2011 1:03 pm
by Wombaticus Rex

I am also reminded of the Joshua-Michéle Ross series on O'Reilly Radar a few years back...let's review.

Source: http://radar.oreilly.com/2009/05/captiv ... mmons.html

In January 2002 DARPA launched the Information Awareness Office. The mission was to “imagine, develop, apply, integrate, demonstrate and transition information technologies, components and prototype, closed-loop, information systems that will counter asymmetric threats by achieving total information awareness” (emphasis added). The notion of a government agency achieving total information awareness was too Orwellian to ignore. Under criticism that this “awareness” could quickly migrate to a mass surveillance system, the program was defunded.

Fast-forward to last week and my near-purchase of Libbey Duratuff Gibraltar glasses (the perfect bourbon glass, one might speculate). Over the course of the next few days I was peppered with exact-match ads for Libbey Duratuff glassware on several other websites; a small example of information awareness at work.

Personal data is the currency of Web 2.0. Knowing what we watch, buy, click, own, what we think, intend and ultimately do confers competitive advantage. Facebook possesses your social graph, your personal interests and your full profile (age, location, relationship status, etc.), not to mention your daily (or hourly) answer to their persistent question, “what's on your mind?”. Reviewing the “25 Surprising Things Google Knows About You” should give anyone pause. And it's not just the Web 2.0 set. Credit card companies, telcos, insurance, pharma… all are collecting vast stores of personal data. If you watch the trendline, it is moving toward more data and more analytic capability - not less.

So why is it that we seem to have more comfort when the capacity for total information awareness lies with corporations as opposed to government? Experience shows that there is a very thin barrier between the two. To wit, the release of thousands of phone records to the U.S. government - and, conveniently, government immunity for those same corporations after the breach. Google and Yahoo! and Microsoft have all been accused of cooperating with the Chinese government to aid censorship and repression of free speech. What happens if/when we encounter the next version of the Bush administration that sees no problem abrogating civil rights in pursuit of “evildoers”?

What's more, when we deliver our personal information over to corporations, we are giving this data to institutions that are amoral. Companies are not yet structured to deliver moral or ethical results - they are encouraged to grow and deliver “shareholder value” (read: money), which is a numb and narrow measure of value. Do I want my data to be managed by an amoral institution?

To be clear - I want the convenience and miracles that modern technology brings. I love the Internet and I am willing to give over lots of data in the trade. But I want two fundamental protections:

First, change the corporation. The structure of the corporation continues to be driven by 20th century hard goals of efficiency and scale - not by more complex measures of environmental sustainability, value creation and the commonweal. These are simply not adequately factored into any structural, organizational, incentive or taxation systems of business today. Profit and profit motive are fine - but hiding social and environmental costs is no longer acceptable. I want to deal with institutions capable of morality. This is no small task - but if we can build the Internet….

Second, we need a right to privacy that matches the 21st-century reality. As a friend of mine likes to say, “privacy is now a responsibility - not a right.” While it is pithy (and perhaps true), the reason we grant rights - and laws to enforce those rights - is the simple fact that people do not generally have the wherewithal to protect themselves from large, institutional interests. In the same way that regulatory structures are needed to keep a financial system in balance (alas, even the Ayn Rand acolyte Greenspan finally agrees with this truism), we need new rights and regulations governing the use of our personal data - and simple sets of controls over who has access to it.

The true work of the 21st century lies not in refining our technology - this we will achieve without any political will. The work lies in re-imagining our institutions.


Of course, his first "solution" is such an obvious category error you can immediately tell he wasn't going through an editor. The challenge of building the internet was electrical engineering and physical logistics -- changing the nature of the corporation is an institutional crusade requiring a completely different skillset and strategy. Strategies, really.

Today on Twitter, all the Big Thinkers are abuzz about the idea of "repurposing" Federal bureaucracy, and I couldn't help but ask for examples of that being done successfully -- so far all the responses have been token "corporate turnaround" stories from the private sector. I can't tell if I'm blinded by my cynicism, or if they're really that naive about the tremendous logistical gap between fixing IBM's management structure and turning around the US Federal Government...

Anyways, the final installment in Ross's series gets more meaty...


Source: http://radar.oreilly.com/2009/05/the-di ... ticon.html

The Digital Panopticon

....

Bentham was left frustrated in his vision to build the Panopticon. But the concept endured - not just as a literal architecture for controlling physical subjects (there are many Panopticons that now bear Bentham’s stamp) - but as a metaphor for understanding the function of power in modern times. French philosopher Michel Foucault dedicated a whole section of his book Discipline and Punish to the significance of the Panopticon. His take was essentially this: The same mechanism at work in the Panopticon - making subjects totally visible to authority - leads to those subjects internalizing the norms of power. In Foucault’s words, “…the major effect of the Panopticon; to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power. So to arrange things that the surveillance is permanent in its effects, even if it is discontinuous in its action; that the perfection of power should tend to render its actual exercise unnecessary.” In short, under the possibility of total surveillance the inmate becomes self-regulating.

The social technologies we see in use today are fundamentally panoptical - the architecture of participation is inherently an architecture of surveillance.

In the age of social networks we find ourselves coming under a vast grid of surveillance - of permanent visibility. The routine self-reporting of what we are doing, reading, thinking via status updates makes our every action and location visible to the crowd. This visibility has a normative effect on behavior (in other words we conform our behavior and/or our speech about that behavior when we know we are being observed).

In many cases we are opting into automated reporting structures (Google Latitude, Loopt, etc.) that detail our location at any given point in time. We are doing this in exchange for small conveniences (finding local sushi more quickly, gaining “ambient intimacy”) without ever considering the bargain that we are striking. In short, we are creating the ultimate Panopticon - with our data centrally housed in the cloud (see previous post on the Captivity of the Commons) - our every movement and up-to-the-minute status is a matter of public record. In the same way that networked communications move us from a one-to-many broadcast model to a many-to-many one, so we are seeing the move to a many-to-many surveillance model: a global community of voyeurs ceaselessly confessing to “What are you doing?” (Twitter) or “What's on your mind?” (Facebook).

Captivity of the Commons focused on the risks of corporate ownership of personal data. This post is concerned with how, as individuals, we have grown comfortable giving our information away; how our sense of privacy is changing under the small conveniences that disclosure brings; and how our identity changes as an effect of constant self-disclosure. Many previous comments have rightly noted that privacy is often cultural -- if you don't expect it, there is no such thing as an infringement. Yet it is important to reckon with the changes we see occurring around us and ask what kind of culture we wish to create (or contribute to).

Jacques Ellul’s book, Propaganda, had a thesis that was at once startling and obvious: propaganda’s end goal is not to change your mind at any one point in time, but to create a changeable mind. Thus, when invoked at the necessary time, humans could be manipulated into action. In the U.S. this language was expressed through catchphrases like “communism in our backyard,” “enemies of freedom” or the current manufactured hysteria about Obama as a “socialist”.

Similarly, the significance of status updates and location-based services may lie not in any individual disclosure but in a culture that has become accustomed to constant disclosure.


Tech guys waking up to social conditioning implications of their own work is a beautiful thing, innit?

Re: Fascinating White Paper from Google/CIA Project

PostPosted: Sat Aug 27, 2011 1:28 pm
by 82_28
Wombaticus Rex wrote:^^Actually, I am continually amazed how few data points exist online. The fact that Google Books exists in a wholly castrated state bothers me a lot -- opening those floodgates would radically increase the amount of actual information on the internet. There is a vast gulf separating information from mere content, which the internet has in kaleidoscopic abundance.

It would also be a good step to force all the academic archives currently behind paywalls onto the public net ASAP.


Very well said, and me too! New information is bunk. It's only those who have taken the time to archive the real info -- the stuff that came before the "information age" -- and make it free of charge that are worth a damn. And this is because you rely on their own search capabilities and know-how when it comes to drilling down. Google is quickly becoming more and more of a joke to me. I still avidly use it, but Google sucks anymore.

The data points that do exist are mundane as hell. More on this later. Gotta run to work. . .