Questioning Consciousness


Re: Questioning Consciousness

Postby DrEvil » Sun Sep 26, 2021 12:07 pm

Yeah, that's true, but it's still constrained to very narrow fields. You can train an AI to be really good at recognizing cats, but it will be shit at anything else. The lacking piece is generalized learning, but we're inching closer every day. DeepMind thinks they know how to do it with reinforcement learning, and Google's goal has always been to make an AGI.

The problem is if we ever create an actual, honest-to-God AI, will we be able to recognize it? You can always make the argument that it's just really good at mimicking thinking without actually thinking, even when you can't tell the difference between a person and the AI. There's no test for consciousness.

There's also the problem that current AI models are black boxes. We have no idea what's going on inside them once they're up and running. They just take in information and spit out results, and every now and then they do something unpredictable, and we have no idea why (Google set up an AI to manage cooling at one of their data centers a few years ago, and it worked swimmingly, including doing things that were completely counter-intuitive, like raising the temperature in certain parts of the building. The engineers said it acted like a living thing. Same thing happened with AlphaGo, when it created new strategies that human players had never considered). That's not a big issue with an AI that tags your photos, but it might be for an AI that handles flight control.
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby Harvey » Sun Sep 26, 2021 6:49 pm

DrEvil » Sun Sep 26, 2021 5:07 pm wrote:The lacking piece is generalized learning


The lacking piece is meaning. In my view the transhumanist agenda was never intended to arrive at technological transcendence, that is, to make people out of machines, but was always intended to achieve precisely the opposite: to make machines out of people.

The unconscious desire of transhumanism is to make order where order already exists.
And while we spoke of many things, fools and kings
This he said to me
"The greatest thing
You'll ever learn
Is just to love
And be loved
In return"


Eden Ahbez

Re: Questioning Consciousness

Postby DrEvil » Sun Sep 26, 2021 11:20 pm

Harvey » Mon Sep 27, 2021 12:49 am wrote:
DrEvil » Sun Sep 26, 2021 5:07 pm wrote:The lacking piece is generalized learning


The lacking piece is meaning. In my view the transhumanist agenda was never intended to arrive at technological transcendence, that is, to make people out of machines, but was always intended to achieve precisely the opposite: to make machines out of people.

The unconscious desire of transhumanism is to make order where order already exists.


I think it's more to make a different order, because the current order is pretty fucking shit if you think about it. You're born, you live for a few decades, probably with a big helping of suffering along the way, then you watch your loved ones die, or you beat them to it and they get to watch you die, and then it's over. You no longer exist and never will again, and the universe doesn't give a flying fuck. The end.

Most of it is just a different type of religion, with the small caveat that some of the things they preach might one day actually be possible. Some of those things I'm all in favor of (like fighting aging or expanding our abilities), but a lot of what they say is just delusional horseshit, like Kurzweil wanting to simulate his dead father, or AI gods magically curing all ills and ushering in utopia (technically that's singularitarianism, not transhumanism, but the two are pretty interchangeable).

To me at least it's infuriating that consciousness can exist in our reality at all, only to be snuffed out after a short while. The best you can hope for is some good times along the way and then it's bye-bye, off to oblivion. In that sense transhumanism is the one religion with at least a tiny shred of hope of fixing that. Probably not gonna happen, but it's at least conceivable. It's something you can actively work on and see tangible improvements, unlike all the traditional religions, which have been preaching the same thing for millennia with fuck-all to show for themselves.

I think that's what attracts a lot of the Silicon Valley crowd. It's a religion where you don't have to take things on faith; you can actually spend your billions on making the tenets of your faith more likely to come true. I don't think that's a good thing, because most tech-bros are insufferable assholes, but I do subscribe to the idea of the slow singularity. Not abrupt, revolutionary change with bootstrapping AIs, but more gradual changes that add up to revolutionary change at a pace we can cope with, or in other words, plain old progress.

We're already in the early stages, with the entire world becoming connected within a decade. Next the world will merge with the artificial through augmented reality, and over time that merging will deepen until the two are indistinguishable. Along the way more and more of our ills will become possible to cure, and our lifespan will likely expand with it. The end result will hopefully be a reality where the inhabitants are happy and live good lives. It probably won't seem like a happy, good life from our perspective, but that's irrelevant, because we're not the ones living there.

/rant
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby conniption » Tue Feb 22, 2022 4:21 am

Check it out -

http://www.unariunwisdom.com/

This web site is an online study group dedicated to the discussion
and study of the works of Dr. Ernest L. Norman.


UNARIUN WISDOM – Blending of Science and Spirit

“To attempt to describe the UN.AR.I.U.S. Science would be like trying to place all the visible and invisible universes into the proverbial goldfish bowl, for this Science does encompass all known and unknown elements and factors of life and the creative principles which make all things possible. For practical purposes, however and inasmuch as any person would be completely overwhelmed with even a glimpse into the Infinite Cosmos, he must therefore begin – as Kung Fu said: This long journey into Infinity begins with the first step – and this first step is the teaching course of UN.AR.I.U.S. Science.

It must be properly understood that unlike all other existing religions, cultisms, philosophies, metaphysics and so-called mind sciences, et cetera, the Unariun teachings is an exact science. In our present day electronic and atomic technocracy, the man of science has struck many close parallels with the basic principles of life...

http://www.unariunwisdom.com/

Re: Questioning Consciousness

Postby DrEvil » Mon Jun 20, 2022 3:07 pm

https://www.theatlantic.com/technology/ ... ss/661329/

Artificial Consciousness Is Boring

The reality of AI is something harder to comprehend.
By Stephen Marche June 19, 2022, 8 AM ET

Last week, Google put one of its engineers on administrative leave after he claimed to have encountered machine sentience on a dialogue agent named LaMDA. Because machine sentience is a staple of the movies, and because the dream of artificial personhood is as old as science itself, the story went viral, gathering far more attention than pretty much any story about natural-language processing (NLP) has ever received. That’s a shame. The notion that LaMDA is sentient is nonsense: LaMDA is no more conscious than a pocket calculator. More importantly, the silly fantasy of machine sentience has once again been allowed to dominate the artificial-intelligence conversation when much stranger and richer, and more potentially dangerous and beautiful, developments are under way.

The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is.

Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks without specific training “out of the box,” as it were.

Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason.

The method by which PaLM reasons is called “chain-of-thought prompting.” Sharan Narang, one of the engineers leading the development of PaLM, told me that large language models have never been very good at making logical leaps unless explicitly trained to do so. Giving a large language model the answer to a math problem and then asking it to replicate the means of solving that math problem tends not to work. But in chain-of-thought prompting, you explain the method of getting the answer instead of giving the answer itself. The approach is closer to teaching children than programming machines. “If you just told them the answer is 11, they would be confused. But if you broke it down, they do better,” Narang said.

Google illustrates the process in the following image:

[Image: Google's illustration of chain-of-thought prompting]
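
To make the structure concrete, here is a minimal sketch of the two prompting styles. This is illustrative Python, not from the article; the worked example follows the "11" problem Narang mentions, and complete() is a hypothetical stand-in for whatever text-completion endpoint you have, not a real API.

    def complete(prompt: str) -> str:
        # Hypothetical stand-in for a large-language-model completion call.
        raise NotImplementedError("wire up a real model endpoint here")

    # Standard few-shot prompt: the worked example states only the answer.
    standard_prompt = (
        "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls "
        "each. How many tennis balls does he have now?\n"
        "A: The answer is 11.\n"
        "\n"
        "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
        "How many apples do they have?\n"
        "A:"
    )

    # Chain-of-thought prompt: the worked example spells out the steps,
    # nudging the model to produce step-by-step reasoning before its answer.
    cot_prompt = (
        "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls "
        "each. How many tennis balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n"
        "\n"
        "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
        "How many apples do they have?\n"
        "A:"
    )

The model itself is untouched; the only difference is whether the example in the prompt shows its working.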

Adding to the general weirdness of this property is the fact that Google’s engineers themselves do not understand how or why PaLM is capable of this function. The difference between PaLM and other models could be the brute computational power at play. It could be the fact that only 78 percent of the language PaLM was trained on is English, thus broadening the meanings available to PaLM as opposed to other large language models, such as GPT-3. Or it could be the fact that the engineers changed the way that they tokenize mathematical data in the inputs. The engineers have their guesses, but they themselves don’t feel that their guesses are better than anybody else’s. Put simply, PaLM “has demonstrated capabilities that we have not seen before,” Aakanksha Chowdhery, a member of the PaLM team who is as close as any engineer to understanding PaLM, told me.

None of this has anything to do with artificial consciousness, of course. “I don’t anthropomorphize,” Chowdhery said bluntly. “We are simply predicting language.” Artificial consciousness is a remote dream that remains firmly entrenched in science fiction, because we have no idea what human consciousness is; there is no functioning falsifiable thesis of consciousness, just a bunch of vague notions. And if there is no way to test for consciousness, there is no way to program it. You can ask an algorithm to do only what you tell it to do. All that we can come up with to compare machines with humans are little games, such as Turing’s imitation game, that ultimately prove nothing.

Where we’ve arrived instead is somewhere more foreign than artificial consciousness. In a strange way, a program like PaLM would be easier to comprehend if it simply were sentient. We at least know what the experience of consciousness entails. All of PaLM’s functions that I’ve described so far come from nothing more than text prediction. What word makes sense next? That’s it. That’s all. Why would that function result in such enormous leaps in the capacity to make meaning? This technology works by substrata that underlie not just all language but all meaning (or is there a difference?), and these substrata are fundamentally mysterious. PaLM may possess modalities that transcend our understanding. What does PaLM understand that we don’t know how to ask it?

Using a word like understand is fraught at this juncture. One problem in grappling with the reality of NLP is the AI-hype machine, which, like everything in Silicon Valley, oversells itself. Google, in its promotional materials, claims that PaLM demonstrates “impressive natural language understanding.” But what does the word understanding mean in this context? I am of two minds myself: On the one hand, PaLM and other large language models are capable of understanding in the sense that if you tell them something, its meaning registers. On the other hand, this is nothing at all like human understanding. “I find our language is not good at expressing these things,” Zoubin Ghahramani, the vice president of research at Google, told me. “We have words for mapping meaning between sentences and objects, and the words that we use are words like understanding. The problem is that, in a narrow sense, you could say these systems understand just like a calculator understands addition, and in a deeper sense they don’t understand. We have to take these words with a grain of salt.” Needless to say, Twitter conversations and the viral information network in general are not particularly good at taking things with a grain of salt.

Ghahramani is enthusiastic about the unsettling unknown of all of this. He has been working in artificial intelligence for 30 years, but told me that right now is “the most exciting time to be in the field” exactly because of “the rate at which we are surprised by the technology.” He sees huge potential for AI as a tool in use cases where humans are frankly very bad at things but computers and AI systems are very good at them. “We tend to think about intelligence in a very human-centric way, and that leads us to all sorts of problems,” Ghahramani said. “One is that we anthropomorphize technologies that are dumb statistical-pattern matchers. Another problem is we gravitate towards trying to mimic human abilities rather than complementing human abilities.” Humans are not built to find the meaning in genomic sequences, for example, but large language models may be. Large language models can find meaning in places where we can find only chaos.

Even so, enormous social and political dangers are at play here, alongside still hard-to-fathom possibilities for beauty. Large language models do not produce consciousness but they do produce convincing imitations of consciousness, which are only going to improve drastically, and will continue to confuse people. When even a Google engineer can’t tell the difference between a dialogue agent and a real person, what hope is there going to be when this stuff reaches the general public? Unlike machine sentience, these questions are real. Answering them will require unprecedented collaboration between humanists and technologists. The very nature of meaning is at stake.

So, no, Google does not have an artificial consciousness. Instead, it is building enormously powerful large language systems with the ultimate goal, as Narang said, “to enable one model that can generalize across millions of tasks and ingest data across multiple modalities.” Frankly, it’s enough to worry about without the science-fiction robots playing on the screens in our head. Google has no plans to turn PaLM into a product. “We shouldn’t get ahead of ourselves in terms of the capabilities,” Ghahramani said. “We need to approach all of this technology in a cautious and skeptical way.” Artificial intelligence, particularly the AI derived from deep learning, tends to rise rapidly through periods of shocking development, and then stall out. (See self-driving cars, medical imaging, etc.) When the leaps come, though, they come hard and fast and in unexpected ways. Ghahramani told me that we need to achieve these leaps safely. He’s right. We’re talking about a generalized-meaning machine here: It would be good to be careful.

The fantasy of sentience through artificial intelligence is not just wrong; it’s boring. It’s the dream of innovation by way of received ideas, the future for people whose minds never escaped the spell of 1930s science-fiction serials. The questions forced on us by the latest AI technology are the most profound and the most simple; they are questions that, as ever, we are completely unprepared to face. I worry that human beings may simply not have the intelligence to deal with the fallout from artificial intelligence. The line between our language and the language of the machines is blurring, and our capacity to understand the distinction is dissolving inside the blur.
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby Iamwhomiam » Mon Jun 20, 2022 6:23 pm

I found this interview with Donald Hoffman interesting. Even though I'm unfamiliar with him, I'm familiar with some of the ideas he shares about our perception of reality. I hope you'll find value in it, as I have.



https://www.youtube.com/watch?v=reYdQYZ9Rj4

Re: Questioning Consciousness

Postby DrEvil » Tue Jun 21, 2022 10:17 am

Synchronicity! I was watching a talk by him just yesterday.
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby Belligerent Savant » Tue Jun 21, 2022 10:29 am

Belligerent Savant » Sat Mar 23, 2019 11:07 am wrote:.

Ah yes, Hoffman. There's stuff from Hoffman within RI's archives, which I'll endeavor to locate and share when I'm in front of my laptop (rather than this irksome mobile device).
Indeed, Elfis included a vid clip of a Hoffman talk in page 1 of this thread.

His Desktop analogy is the one that makes the rounds on the interwebs:

Hoffman's computer analogy is that physical space is like the desktop and that objects in it are like desktop icons, which are produced by the graphical user interface (GUI). Our senses, he says, form a biological user interface—a gooey GUI—between our brain and the outside world, transducing physical stimuli such as photons of light into neural impulses processed by the visual cortex as things in the environment. GUIs are useful because you don't need to know what is inside computers and brains. You just need to know how to interact with the interface well enough to accomplish your task. Adaptive function, not veridical perception, is what is important.

Hoffman's holotype is the Australian jewel beetle Julodimorpha bakewelli. Females are large, shiny, brown and dimpled. So, too, are discarded beer bottles dubbed “stubbies,” and males will mount them until they die by heat, starvation or ants. The species was on the brink of extinction because its senses and brain were designed by natural selection not to perceive reality (it's a beer bottle, you idiot!) but to mate with anything big, brown, shiny and dimply.


The author of the excerpt is somewhat of a detractor, however:


ITP is well worth serious consideration and testing, but I have my doubts. First, how could a more accurate perception of reality not be adaptive? Hoffman's answer is that evolution gave us an interface to hide the underlying reality because, for example, you don't need to know how neurons create images of snakes; you just need to jump out of the way of the snake icon. But how did the icon come to look like a snake in the first place? Natural selection. And why did some nonpoisonous snakes evolve to mimic poisonous species? Because predators avoid real poisonous snakes. Mimicry works only if there is an objective reality to mimic.

Hoffman has claimed that “a rock is an interface icon, not a constituent of objective reality.” But a real rock chipped into an arrow point and thrown at a four-legged meal works even if you don't know physics and calculus. Is that not veridical perception with adaptive significance?

As for jewel beetles, stubbies are what ethologists call supernormal stimuli, which mimic objects that organisms evolved to respond to and elicit a stronger response in doing so, such as (for some people) silicone breast implants in women and testosterone-enhanced bodybuilding in men. Supernormal stimuli operate only because evolution designed us to respond to normal stimuli, which must be accurately portrayed by our senses to our brain to work.

Hoffman says that perception is species-specific and that we should take predators seriously but not literally. Yes, a dolphin's icon for “shark” no doubt looks different than a human's, but there really are sharks, and they really do have powerful tails on one end and a mouthful of teeth on the other end, and that is true no matter how your sensory system works.

Also, computer simulations are useful for modeling how evolution might have happened, but a real-world test of ITP would be to determine if most biological sensory interfaces create icons that resemble reality or distort it. I'm betting on reality. Data will tell.

Finally, why present this problem as an either-or choice between fitness and truth? Adaptations depend in large part on a relatively accurate model of reality. The fact that science progresses toward, say, eradicating diseases and landing spacecraft on Mars must mean that our perceptions of reality are growing ever closer to the truth, even if it is with a small “t.”


https://www.scientificamerican.com/arti ... eally-are/
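
For what it's worth, the simulations in question are easy to caricature in a few lines. A toy sketch of the genre (illustrative Python, not Hoffman's actual setup): resource payoff peaks at a middling quantity, a "truth" strategy perceives and prefers the larger quantity, and an "interface" strategy perceives only the payoff. Head to head, the interface strategy wins:

    import random

    # Payoff is non-monotonic: too little or too much resource is bad.
    def payoff(quantity: float) -> float:
        return max(0.0, 1.0 - abs(quantity - 0.5) * 2)  # peaks at 0.5

    def choose_truth(a: float, b: float) -> float:
        # "Veridical" strategy: sees true quantities, takes the bigger one.
        return a if a > b else b

    def choose_interface(a: float, b: float) -> float:
        # "Interface" strategy: sees only fitness payoffs, takes the better one.
        return a if payoff(a) > payoff(b) else b

    random.seed(1)
    truth_score = interface_score = 0.0
    for _ in range(10_000):
        a, b = random.random(), random.random()
        truth_score += payoff(choose_truth(a, b))
        interface_score += payoff(choose_interface(a, b))

    print(truth_score, interface_score)  # interface comes out ahead

Note that the critic's objection still bites in the toy version: the interface strategy is still tracking a lawful function of the real quantity. It reports a different summary of the world, not no world at all.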


Of course, none of this rules out the Holographic theory of our Universe...

Re: Questioning Consciousness

Postby DrEvil » Tue Jun 21, 2022 3:08 pm

I lean towards there being a fairly strong correlation between the user-interface representations we experience and objective reality, but I also think we're extremely selective about what parts of reality we take in. Some of that is down to physical limitations, like only being able to see a narrow band of the electromagnetic spectrum, or hear only certain frequency ranges, and some of it is edited out by our brains because they decide it's not important. There's a ton of icons the computer just never shows us. We're basically running the pared-down trial version.

It's amazing what our brains will ignore if they think it's not relevant, like the gorilla walking across the stage while we're preoccupied with counting how many times the ball is passed. Half of us will literally not see a freakin' gorilla standing right in front of us just because our brains say it's not important right now.

Another example:


https://www.youtube.com/watch?v=ubNF9QNEQLA
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby Joe Hillshoist » Thu Jun 23, 2022 10:07 pm

So how many did you get?

Watching that video messed with my head. I didn't notice the changes as such until I saw the coat change, but I did notice things were weird. Probably picked about 10.

Re: Questioning Consciousness

Postby DrEvil » Fri Jun 24, 2022 1:39 pm

I didn't count, but on my first watch the only thing I distinctly remember was the coat, because it was so obvious (I'm usually terrible at these things). It makes sense though: the brain can't pay attention to everything all the time, it has to prioritize, but the more complex the environment the higher the chance it prioritizes away relevant information, and this video was designed specifically to showcase that.
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby Joe Hillshoist » Sat Jun 25, 2022 9:42 am

The coat triggered me and I suddenly clicked to it. From then on the story, i.e. what was going on, no longer mattered and everything that seemed odd grabbed my attention.

Which raises the question (in relation to your comment): "What's relevant information?"

For me the definition of "relevant information" changed as soon as the coat did. It wasn't about understanding what was going on, now I was trying to work out how I'd been tricked.

Re: Questioning Consciousness

Postby DrEvil » Sat Jun 25, 2022 10:08 am

My guess (heavy emphasis on "guess") is that when you noticed the coat you triggered some of the same circuitry that would be triggered by a rustling in the tall grass. The "you're about to get eaten by a tiger, pay attention" circuitry. Change means uncertainty, uncertainty means things can be dangerous, and you pay attention to potentially dangerous things and try to work out what's going on.

If you subscribe to the free energy principle, this is an excellent example of it in action. The brain wants to minimize surprise, to make its predictions about its surroundings as accurate as possible. If the brain starts throwing out red flags, it immediately tries to correct its internal model by working out why things are changing, so it can better predict them in the future.
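
A cartoon of that correction loop, if it helps (a sketch of the generic prediction-error idea in Python, not Friston's actual formalism):

    # The internal model predicts a sensory value; each surprise nudges
    # the prediction toward what was actually observed.
    def update(prediction: float, observation: float, rate: float = 0.2) -> float:
        error = observation - prediction  # the "red flag"
        return prediction + rate * error  # correct the internal model

    prediction = 0.0
    world = 10.0  # the coat is suddenly a different coat
    for step in range(10):
        surprise = (world - prediction) ** 2
        print(f"step {step}: prediction={prediction:.2f}, surprise={surprise:.2f}")
        prediction = update(prediction, world)

Surprise drops on every step as the model re-fits, which is the whole game: attention gets yanked toward whatever will explain the error away fastest.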
"I only read American. I want my fantasy pure." - Dave

Re: Questioning Consciousness

Postby Joe Hillshoist » Sun Jun 26, 2022 7:58 am

DrEvil » 26 Jun 2022 00:08 wrote:My guess (heavy emphasis on "guess") is that when you noticed the coat you triggered some of the same circuitry that would be triggered by a rustling in the tall grass. The "you're about to get eaten by a tiger, pay attention" circuitry. Change means uncertainty, uncertainty means things can be dangerous, and you pay attention to potentially dangerous things and try to work out what's going on.

If you subscribe to the free energy principle, this is an excellent example of it in action. The brain wants to minimize surprise, to make its predictions about its surroundings as accurate as possible. If the brain starts throwing out red flags, it immediately tries to correct its internal model by working out why things are changing, so it can better predict them in the future.


That first paragraph is definitely part of it.

But ... okay, so while I was following the narrative I was interpreting the situation in a particular way, and had targets for my attention and goals for the situation - understanding it and then trying to beat it. (I'm a very competitive person deep down.)

The coat was a trigger point because the narrative became irrelevant once I noticed the coat change. It was the same sort of distraction as the people with the balls in the gorilla video. So before that point I didn't really know what I was looking for, just that I was looking, and afterward I knew I was looking for changes in the image, and then I started twigging to them.

It's a situational awareness thing too. I'd normally expect myself to be sitting in that room not listening to the detective droning on but wondering why shit was changing every time I looked around. I reckon I have some level of ADHD, so in real life I'd back myself to pick up on the trickery in that scene faster (if I was in that room with a camera's-eye view and no other sensory input), because I'm easily distracted and have trouble concentrating on things - well, on pinpoint things like pompous people talking. I like to be aware of my surroundings and how they are changing.

These vids are a form of magic trick. They use directed attention to make a point about how vulnerable we are to directed attention. LOL

Re: Questioning Consciousness

Postby Belligerent Savant » Sat May 27, 2023 1:35 pm

.

Per the convo on the latest page of the Limits of Science thread: viewtopic.php?f=8&t=42212&start=255

Calls to mind an author I haven't read in some time and will need to revisit.

Jorge Luis Borges.

https://en.wikipedia.org/wiki/Jorge_Luis_Borges
Jorge Francisco Isidoro Luis Borges Acevedo (/ˈbɔːrhɛs/ BOR-hess,[2] Spanish: [ˈxoɾxe ˈlwis ˈβoɾxes]; 24 August 1899 – 14 June 1986) was an Argentine short-story writer, essayist, poet and translator, as well as a key figure in Spanish-language and international literature. His best-known books, Ficciones (Fictions) and El Aleph (The Aleph), published in the 1940s, are collections of short stories exploring themes of dreams, labyrinths, chance, infinity, archives, mirrors, fictional writers and mythology.[3] Borges's works have contributed to philosophical literature and the fantasy genre, and have had a major influence on the magic realist movement in 20th century Latin American literature.[4]

Born in Buenos Aires, Borges later moved with his family to Switzerland in 1914, where he studied at the Collège de Genève. The family travelled widely in Europe, including Spain. On his return to Argentina in 1921, Borges began publishing his poems and essays in surrealist literary journals. He also worked as a librarian and public lecturer.[5] In 1955, he was appointed director of the National Public Library and professor of English Literature at the University of Buenos Aires. He became completely blind by the age of 55. Scholars have suggested that his progressive blindness helped him to create innovative literary symbols through imagination.[Note 1] By the 1960s, his work was translated and published widely in the United States and Europe. Borges himself was fluent in several languages.


Some of his thoughts are expressed here:

https://bigother.com/2022/08/24/jorge-l ... nd-more-2/

Jorge Luis Borges on Reality, Writing, Literature, and More


Happy birthday, Jorge Luis Borges! Here are some quotes from the writer:

“Reality is not always probable, or likely.”

“I have always imagined that Paradise will be a kind of library.”

“Before I ever wrote a single line, I knew, in some mysterious and therefore unequivocal way, that I was destined for literature. What I didn’t realize at first is that besides being destined to be a reader, I was also destined to be a writer, and I don’t think one is less important than the other.”

“Any time something is written against me, I not only share the sentiment but feel I could do the job far better myself. Perhaps I should advise would-be enemies to send me their grievances beforehand, with full assurance that they will receive my every aid and support. I have even secretly longed to write, under a pen name, a merciless tirade against myself.”

“Literature is not exhaustible, for the sufficient and simple reason that a single book is not. A book is not an isolated entity: it is a narration, an axis of innumerable narrations. One literature differs from another, either before or after it, not so much because of the text as for the manner in which it is read.”

“A book is more than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory. A book is not an isolated being: it is a relationship, an axis of innumerable relationships.”

“In the critic’s vocabulary, the word ‘precursor’ is indispensable, but it should be cleansed of all connotations of polemic or rivalry. The fact is that every writer creates his own precursors. His work modifies our conception of the past, as it will modify the future.”

“Music, states of happiness, mythology, faces belabored by time, certain twilights and certain places try to tell us something, or have said something we should not have missed, or are about to say something; this imminence of a revelation which does not occur is, perhaps, the aesthetic phenomenon.”

“That history should have imitated history was already sufficiently marvelous; that history should imitate literature is inconceivable….”

“I foresee that man will resign himself each day to more atrocious undertakings; soon there will be no one but warriors and brigands; I give them this counsel: The author of an atrocious undertaking ought to imagine that he has already accomplished it, ought to impose upon himself a future as irrevocable as the past.”

“A writer—and, I believe, generally all persons—must think that whatever happens to him or her is a resource. All things have been given to us for a purpose, and an artist must feel this more intensely. All that happens to us, including our humiliations, our misfortunes, our embarrassments, all is given to us as raw material, as clay, so that we may shape our art.”

“I am not sure that I exist, actually. I am all the writers that I have read, all the people that I have met, all the women that I have loved; all the cities that I have visited, all my ancestors.”

“As I think of the many myths, there is one that is very harmful, and that is the myth of countries. I mean, why should I think of myself as being an Argentine, and not a Chilean, and not an Uruguayan. I don’t know really. All of those myths that we impose on ourselves—and they make for hatred, for war, for enmity—are very harmful. Well, I suppose in the long run, governments and countries will die out and we’ll be just, well, cosmopolitans.”

“Doubt is one of the names of intelligence.”

“Writing is nothing more than a guided dream.”

“Every novel is an ideal plane inserted into the realm of reality.”

“Years of solitude had taught him that, in one’s memory, all days tend to be the same, but that there is not a day, not even in jail or in the hospital, which does not bring surprises, which is not a translucent network of minimal surprises.”

“Any life, however long and complicated it may be, actually consists of a single moment—the moment when a man knows forever more who he is.”

“The original is unfaithful to the translation.”

“Dictatorships foster oppression, dictatorships foster servitude, dictatorships foster cruelty; more abominable is the fact that they foster idiocy.”


With respect to the concept of a moment in time seemingly stretched beyond its apparent length (we've all experienced this at times; in sports it's called being in "the zone", etc.), it calls to mind the following short story, which was also made into a short B&W film (an abbreviated version of it, below).

https://americanliterature.com/author/a ... eek-bridge

An Occurrence at Owl Creek Bridge

Analysis

The real Owl Creek Bridge is in Tennessee; Bierce, who personally assisted in three military executions during his time as a soldier, likely changed the setting to northern Alabama because the actual bridge did not have a railroad near it at the time the story is set.[5]

The story explores the concept of "dying with dignity". The story shows the reader that the perception of "dignity" provides no mitigation for the deaths that occur in warfare. It further demonstrates psychological escape right before death. Farquhar experiences an intense delusion to distract him from his inevitable death. The moment of horror that the readers experience at the end of the piece, when they realize that he dies, reflects the distortion of reality that Farquhar encounters.[6]

It is not only the narrator who experiences the story but also the readers themselves. As he himself once put it, Bierce detested "bad readers—readers who, lacking the habit of analysis, lack also the faculty of discrimination, and take whatever is put before them, with the broad, blind catholicity of a slop-fed conscience of a parlor pig".[7] Farquhar was duped by a Federal scout—and cursory readers on their part are successfully duped by the author who makes them think they are witnessing Farquhar's lucky escape from the gallows. Instead, they only witness the hallucination of such an escape taking place in the character's unconscious mind which is governed by the instinct of self-preservation.

Influence

The plot device of a long period of subjective time passing in an instant, such as the imagined experiences of Farquhar while falling, has been explored by several authors.[8] An early literary antecedent appears in the Tang dynasty tale, The Governor of Nanke, by Li Gongzuo. Another medieval antecedent is Don Juan Manuel's Tales of Count Lucanor, Chapter XII (c. 1335), "Of that which happened to a Dean of Santiago, with Don Illan, the Magician, who lived at Toledo", in which a life happens in an instant.[9][10] Charles Dickens's essay "A Visit to Newgate" wherein a man dreams he has escaped his death sentence has been speculated as a possible source for the story.[11] Bierce's story, in turn, may have influenced "The Snows of Kilimanjaro" by Ernest Hemingway and Pincher Martin by William Golding.[5]

https://en.wikipedia.org/wiki/An_Occurr ... eek_Bridge

