
Wednesday, March 20, 2013

Like Robot, Like Human


Dear Friends,


Be Well.
David



Written on March 15, 2013 at 7:51 AM by JTOZER




I think it goes without saying that humans are flawed.
<pause for dramatic effect>
I know.  It’s a shocker.

So when we think of the things humans are capable of doing, there’s always a margin of error, isn’t there?  There’s always some bell curve that factors in, because we know that we’ve got our shortcomings.  We’re not perfect.

Which is weird, since we are constantly asking the machines we build to be.  Especially the ones that awe and fascinate us the most.

Yeah, I’m talking about robots.

Up until now, the concept of a perfectly constructed robot was just that: a concept.

Such a KIND face… (Cyberman, Copyright BBC)

Our movies and video games tend to depict robots as these inhuman goofy types, or seething, wrathful, intrinsically flawed things that either take on far too many human traits (Cylons) or not NEARLY enough (Cybermen).  In any case, our creative little minds tend to presume that robots are going to lean to the extremes.

But that’s just fun fiction.
However…

What if I told you that there was a process being developed that allowed scientists to implant a very human-like thinking process into a very non-human robot brain?  Would you panic?  Because if so, I’d stop reading now.  And maybe seek out some calming tea.  Or professional help, depending on the severity.

Because it’s really happening, folks, and it’s going to change the way we think about Artificial Intelligence in a number of ways.

It’s called the Adaptive Character of Thought-Rational architecture, or ACT-R, courtesy of the Naval Research Laboratory (NRL).  So what does it do?

According to the recently released White Paper, a cognitive architecture is a set of computational modules that, working together, strive to produce human-level intelligence.

I’m just going to let that sink in for a minute there.

“But wait,” I hear you saying, “didn’t you start off this blog talking about how humans are flawed?”

Yes.  I did.  That’s what makes this all the more exciting.  They’re not trying to create the perfect, godlike deathbots of SciFi lore and repute.  Rather, they’re creating a synthetic version of people, so to speak.
No, wait!  Don’t panic.  Let me explain.

Thinking like a person means thinking imperfectly.  We remember things strangely.  Our memories degrade over time.  We let our emotions guide us.  Humans are driven by patterns and associations and experience over facts and deductive reasoning.

As it turns out, that’s exactly what these scientists are trying to capture.

The Soar architecture uses a modest set of building blocks to achieve human intelligence, including different types of memories (procedural, semantic, episodic) and different types of learning (reinforcement, chunking, semantic learning, episodic learning).

Learning is the key point there.  Not retaining information in a database, but actually learning.

These scientists are using ACT-R and ACT-R/E (the Adaptive Character of Thought-Rational/Embodied architecture) to build better, more comprehensive models of human cognition and leverage these models to improve the robot’s ability to interact with humans.  So why is this architecture so unique?

Because it’s designed to model human mentality by placing an emphasis on the limitations of human cognition.

These robots are trying to “get us” down at our level.  Well, that’s an interesting idea.  But before you act insulted, consider this: the argument is that robots who understand people are, ultimately, better teammates and more natural computational agents.  I guess they have to be able to think like us in order to be efficient and productive for us.

Not to get too philosophical on you, but what does it really mean to think like a human?

It all comes down to how we remember things.

For example, say you meet someone for the first time at a party.  They tell you their name, and if you aren’t completely disinterested in them you will likely try to remember it.  When a person remembers something, they do so by using a series of patterns.  Your mind will try to tie the new information (the name) to defining factors (face, voice, clothing, etc).

When you see this person again, you try to use certain trigger cues.  You see the hair, or the face or smell the perfume and your brain tries to tie the new information (the name) to those things.  Priming from contextual clues could provide the boost you need in memory activation, and the earlier rehearsal of associating those things together would likely be enough for you to remember the name of this person.  Ideally.

Then again, we’re not perfect.  You might end up calling them by the wrong name a few more times before it sticks.

Anyway, ACT-R works on much the same principle.  When the robot (or model, as they call it) is introduced to new information like this, it uses a similarly structured pattern to remember things.  So this information is not just being dumped into a memory bank as raw data to be regurgitated on command.

Rather, it becomes a piece of information that’s associated with other things.
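The name-at-a-party story above can be sketched in code.  Here’s a toy Python model of activation-based recall (all names and numbers are invented for illustration; this is not NRL’s actual ACT-R implementation): each memory chunk has a base-level activation that decays with time since its past uses, plus a boost spread from whatever cues are in the current context.

```python
import math

def base_level(uses, now, d=0.5):
    """Base-level activation: ln of summed decaying traces of past uses."""
    return math.log(sum((now - t) ** -d for t in uses))

def activation(chunk, context, memory, now, weight=1.0):
    """Base-level activation plus spreading activation from context cues."""
    uses = memory[chunk]["uses"]
    assoc = memory[chunk]["assoc"]  # cues associated with this chunk
    spread = weight * sum(1.0 for cue in context if cue in assoc)
    return base_level(uses, now) + spread

# Two remembered names, each tied to the defining factors noticed at the party.
memory = {
    "name:Alice": {"uses": [1.0, 5.0], "assoc": {"red-hair", "perfume", "party"}},
    "name:Bob":   {"uses": [2.0],      "assoc": {"glasses"}},
}

# Seeing the red hair and smelling the perfume primes the "Alice" chunk.
context = {"red-hair", "perfume"}
scores = {c: activation(c, context, memory, now=10.0) for c in memory}
best = max(scores, key=scores.get)
```

With those trigger cues in context, the “Alice” chunk out-activates the rest; strip the cues away and the decayed base-level activation alone may not be enough, which is the blanking-on-a-name failure described above.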

Octavia and a human escort (photo courtesy of the Naval Research Lab)

At a high level, ACT-R is a hybrid symbolic/subsymbolic production-based system.  That means everything is connected to everything else in order to create a memory.
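In a production system, behavior comes from symbolic IF-THEN rules, while the subsymbolic layer decides which matching rule actually fires.  Here’s a minimal hypothetical sketch of that hybrid idea (the rules and utility numbers are made up; this is not the NRL code):

```python
# Symbolic layer: IF-THEN production rules.
# Subsymbolic layer: a numeric utility that settles competition between rules.
productions = [
    {"name": "greet", "condition": lambda g: g.get("sees") == "person",
     "action": lambda g: g.update(said="hello"), "utility": 2.0},
    {"name": "idle", "condition": lambda g: True,
     "action": lambda g: g.update(said=None), "utility": 0.1},
]

def step(goal):
    # Every rule whose symbolic condition matches competes on utility.
    matched = [p for p in productions if p["condition"](goal)]
    chosen = max(matched, key=lambda p: p["utility"])
    chosen["action"](goal)
    return chosen["name"]

goal = {"sees": "person"}
fired = step(goal)  # the higher-utility "greet" rule wins
```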

How do they do this?  By using a system called Specialized Egocentrically Coordinated Spaces, or SECS.  This enables human-like, cognitively plausible spatial reasoning.

This architecture is more than just retaining information as it comes in.  As we all know, our bodies tend to function as a whole; that is, memory retention is often a result of the sum of our parts.

The ACT-R/E model is designed to act as a consumer of visual information provided by external visual systems.  Senses – like sight, sound, environment – all play a part in how we absorb and interpret information around us.  This architecture wants the robot to get that full memory-making experience as well.

One of the recent threads in cognitive science has been embodied, or grounded, cognition.

The focus has been on showing that the body has a major role in shaping the mind.  When the motor and visual modules participate fully in the spreading of contextual activation, it is possible for a robot to learn which objects are best grasped with which motor commands.

Basically, these robots have the capacity to “understand” all their working parts, and those parts can work together to form information.  So if it talks like a human and thinks like a human, that doesn’t mean it is a human.

Speaking of us living, breathing specimens…

There are some things about these robots that deviate from the standard human procedure.  Fatigue, emotional instability, unpredictability, sleepiness, weepiness, derpiness: they’re all intrinsically human aspects.  Aspects that robots have no real reason to contend with, though that hasn’t stopped some SciFi writers from exploring the possibility of having depressed, mopey robots.


Anyway, that doesn’t mean these robots cannot be taught how to approach humans by understanding what makes them so crazy *ahem* interesting.  The high-level goal behind this is to give robots a deep understanding of how people think at the process level in order to make them better teammates.

They’re doing this by equipping robots with the functionality to understand human behavior – like right vs wrong – and use that information to act accordingly.  Skeptical?  Well, so was I.  I mean, how does a robot know the difference between right and wrong when philosophers have been making a living debating that very idea for centuries?

Turns out, in this case it’s more of a holistic approach to situation and crisis.  Noticing how humans tend to make mistakes in predictable ways, for example, or watching how their eyes move when they retrieve memories, can set a standard.

By developing robots that further understand how people think – including errors – they can leverage these models as tools for robots to use as they encounter humans in the world.

For example, the scientists put a robot to work on a serious project: playing hide-and-seek.  Given the fact that ACT-R is designed to learn and understand, the robot was able to grasp the concept of the game fairly quickly.  The model was in fact able to mimic the outward behavior of the person, perfectly matching the hiding behavior.

That sounds small, but it’s really a big, big deal.  The robot was also able to play a credible game of hide and seek against a human.  Think about that.

Don’t believe me?  See for yourself:


Just when you thought you’d seen it all, eh?  It’s like watching the early stages of robot evolution take place.
This architecture is designed with a Theory of Mind (ToM) concept.  That is, the ability to understand the beliefs, desires, and intentions of others.  So why give the robots this empathetic capacity?  ToM is used to improve the robot’s ability to interact with people.  This is pertinent because research in psychology has shown that without ToM, people can be severely impaired in their abilities to interact naturally with others.  Apparently, the same goes for robots.

Simply put, robots are a little freaky when they disregard these things.

So why all of this, you wonder?  Why give robots the ability to think like humans, consider their intentions, and learn to play well with us?  Well, why else do you train?  For the mission.  These robots are being designed to be good teammates to people.  To help them.  To perform missions.  Just like us, they are given a task – like fighting fires for example – and they need to be the best equipped to complete that task to the best of their ability.

In this case, learning how to help humans means having a better robo-understanding of them.  The best part?  This is only the beginning.  The road to good, embodied cognitive models has been and continues to be long, but the scientists at NRL say it’s going to be well-worth the effort.

I guess you know how the old saying goes…

To err is human.  To learn how to err is robot. 

Jessica L. Tozer is a blogger for DoDLive and Armed With Science.  She is an Army veteran and an avid science fiction fan, both of which contribute to her enthusiasm for technology in the military.
Special thanks to the Naval Research Laboratory for providing the information and general awesomeness factor needed for this story.

15:03:2013 -- In the beginning was the code

Tuesday, March 19, 2013

15:03:2013 -- How Much Longer Until Humanity Becomes A Hive Mind?



Dear Friends,


Be Well.
David

How Much Longer Until Humanity Becomes A Hive Mind?



Last month, researchers created an electronic link between the brains of two rats separated by thousands of miles. This was just another reminder that technology will one day make us telepaths. But how far will this transformation go? And how long will it take before humans evolve into a fully-fledged hive mind? We spoke to the experts to find out.

I spoke to three different experts, all of whom have given this subject considerable thought: Kevin Warwick, a British scientist and professor of cybernetics at the University of Reading; Ramez Naam, an American futurist and author of Nexus (a sci-fi novel addressing this topic); and Anders Sandberg, a Swedish neuroscientist from the Future of Humanity Institute at the University of Oxford.

They all told me that the possibility of a telepathic noosphere is very real — and it's closer to reality than we might think. And not surprisingly, this would change the very fabric of the human condition.

Connecting brains

My first question to the group had to do with the technological requirements. How is it, exactly, that we’re going to connect our minds over the Internet, or some future manifestation of it?

“I really think we have sufficient hardware available now — tools like Braingate,” says Warwick. “But we have a lot to learn with regard to how much the brain can adapt, just how many implants would be required, and where they would need to be positioned.”

Naam agrees that we’re largely on our way. He says we already have the basics of sending some sorts of information in and out of the brain. In humans, we’ve done it with video, audio, and motor control. In principle, nothing prevents us from sending that data back and forth between people.

“Practically speaking, though, there are some big things we have to do,” he tells io9. “First, we have to increase the bandwidth. The most sophisticated systems we have right now use about 100 electrodes, while the brain has more than 100 billion neurons. If you want to get good fidelity on the stuff you’re beaming back and forth between people, you’re going to want to get on the order of millions of electrodes.”

Naam says we can build the electronics for that easily, but building it in such a way that the brain accepts it is a major challenge.

The second hurdle, he says, is going beyond sensory and motor control.


“If you want to beam speech between people, you can probably tap into that with some extensions of what we’ve already been doing, though it will certainly involve researchers specifically working on decoding that kind of data,” he says. “But if you want to go beyond sending speech and get into full blown sharing of experiences, emotions, memories, or even skills (a la The Matrix), then you’re wandering into unknown territory.”

Indeed, Sandberg says that picking up and translating brain signals will be a tricky matter.

“EEG sensors have lousy resolution — we get an average of millions of neurons, plus electrical noise from muscles and the surroundings,” he says. “Subvocalisation and detecting muscle twitches is easier to do, although they will still be fairly noisy. Internal brain electrodes exist and can get a lot of data from a small region, but this of course requires brain surgery. I am having great hopes for optogenetics and nanofibers for making kinder, gentler implants that are less risky to insert and easier on their tissue surroundings.”

The real problem, he says, is translating signals in a sensible way. “Your brain representation of the concept "mountain" is different from mine, the result not just of different experiences, but also on account of my different neurons. So, if I wanted to activate the mountain concept, I would need to activate a disperse, perhaps very complex network across your brain,” he tells io9. “That would require some translation that figured out that I wanted to suggest a mountain, and found which pattern is your mountain.”


Sandberg says we normally "cheat" by learning a convenient code called language, where all the mapping between the code and our neural activations is learned as we grow. We can, of course, learn new codes as adults, and this is rarely a problem — adults already master things like Morse code, SMS abbreviations, or subtle signs of gesture and style. Sandberg points to the recent experiments by Nicolelis connecting brains directly, research which shows that it might be possible to get rodents to learn neural codes. But he says this learning is cumbersome, and we should be able to come up with something simpler.

One way is to boost learning. Some research shows that amphetamine, and presumably other learning stimulants, can speed up language learning. Recent work on the Nogo Receptor suggests that brain plasticity can be turned on and off. “So maybe we can use this to learn quickly,” says Sandberg.

Another way is to have software do the translation. It is not hard to imagine machine learning to figure out what neural codes or mumbled keywords correspond to which signal — but setting up the training so that users find it acceptably fast is another matter.
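As a rough illustration of that idea, imagine software learning an average “neural pattern” per concept from noisy training samples, then decoding a new signal to the nearest learned pattern.  Everything below is synthetic toy data, but it has the same machine-learning shape such translation software might take:

```python
import random

random.seed(0)

def noisy(pattern, scale=0.2):
    """A noisy observation of an underlying neural pattern."""
    return [x + random.uniform(-scale, scale) for x in pattern]

# Hypothetical underlying codes for two concepts (purely illustrative).
true_codes = {"mountain": [1.0, 0.0, 1.0], "river": [0.0, 1.0, 0.0]}

# Training: average several noisy observations per concept.
centroids = {}
for concept, code in true_codes.items():
    samples = [noisy(code) for _ in range(20)]
    centroids[concept] = [sum(col) / len(samples) for col in zip(*samples)]

def decode(signal):
    """Translate a signal to the concept with the nearest learned pattern."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(centroids[c], signal))

guess = decode(noisy(true_codes["mountain"]))
```

The catch Sandberg raises is exactly visible here: the centroids are specific to one “brain,” so a system trained on my patterns tells you nothing about yours without retraining.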

“So my guess is that if pairs of people really wanted to ‘get to know each other’ and devoted a lot of time and effort, they could likely learn signals and build translation protocols that would allow a lot of ‘telepathic’ communication — but it would be very specific to them, like the ‘internal language’ some couples have,” says Sandberg. “For the weaker social links, where we do not want to spend months learning how to speak to each other, we would rely on automatically translated signals. A lot of it would be standard things like voice and text, but one could imagine adding supporting ‘subtitles’ showing graphics or activating some neural assemblies.”

Bridging the gap

In terms of the communications backbone, Sandberg believes it’s largely in place, but it will likely have to be extended much further.


“The theoretical bandwidth limitations of even a wireless Internet are far, far beyond the bandwidth limitations of our brains — tens of terabits per second,” he told me, “and there are orbital angular momentum methods that might get far more.”

Take the corpus callosum, for example. It has around 250 million axons, and even at maximal neural firing rates that carries only about 25 gigabits per second, which is evidently enough to keep the hemispheres connected such that we feel we are a single mind.
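That 25-gigabit figure is just arithmetic, under the assumption of roughly one bit per spike at a peak firing rate on the order of 100 Hz (the rate is my assumption, not a number from the article):

```python
axons = 250e6        # axons in the corpus callosum (figure from the article)
max_rate_hz = 100    # assumed peak firing rate per axon, ~1 bit per spike
bits_per_second = axons * max_rate_hz
print(f"{bits_per_second / 1e9:.0f} Gbit/s")  # prints "25 Gbit/s"
```

Tens of terabits per second of wireless bandwidth, in other words, dwarfs the cable that already binds two halves of a mind together.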

As for the interface, Warwick says we should stick to implanted multi-electrode arrays. These may someday become wireless, but they’ll have to remain wired until we learn more about the process. Like Sandberg, he adds that we’ll also need to develop adaptive software interfacing.

Naam envisions something laced throughout the brain, coupled with some device that could be worn on the person’s body.

“For the first part, you can imagine a mesh of nano-scale sensors either inserted through a tiny hole in the skull, or somehow through the brain’s blood vessels. In Nexus I imagined a variant on this — tiny nano-particles that are small enough that they can be swallowed and will then cross the blood-brain barrier and find their way to neurons in the brain.”

Realistically, Naam says that whatever we insert in the brain is going to be pretty low energy consumption. The implant, or mesh, or nano-particles could communicate wirelessly, but to boost their signal — and to provide them power — scientists will have to pair them with something the person wears, like a cap, a pair of glasses, a headband — anything that can be worn very near the brain so it can pick up those weak signals and boost them, including signals from the outside world that will be channeled into the brain.

How soon before the hive mind?

Warwick believes that the technologies required to build an early version of the telepathic noosphere are largely in place. All that’s required, he says, is “money on the table” and the proper ethical approval.

Sandberg concurs, saying that we’re already doing it with cellphones. He points to the work of Charles Stross, who suggests that the next generation will never have to be alone, get lost, or forget anything.

“As soon as people have persistent wearable systems that can pick up their speech, I think we can do a crude version,” says Sandberg. “Having a system that’s on all the time will allow us to get a lot of data — and it better be unobtrusive. I would not be surprised to see experiments with Google Glasses before the end of the year, but we’ll probably end up saying it’s just a fancy way of using cellphones.”

At the same time, Sandberg suspects that “real” neural interfacing will take a while, since it needs to be safe, convenient, and have a killer app worth doing. It will also have to compete with existing communications systems and their apps.

Similarly, Naam says we could build a telepathic network in a few years, but with “very, very low fidelity.”  But that low fidelity, he says, would be considerably worse than the quality we get by using phones, or even text or IM.  “I doubt anyone who’s currently healthy would want to use it.”

But for a really stable, high bandwidth system in and out of the brain, that could take upwards of 15 to 20 years, which Naam concedes is optimistic.

“In any case, it’s not a huge priority,” he says. “And it’s not one where we’re willing to cut corners today. It’s firmly in the medical sphere, and the first rule there is ‘do no harm’. That means that science is done extremely cautiously, with the priority overwhelmingly — and appropriately — being not to harm the human subject.”

Nearly supernatural

I asked Sandberg how the telepathic noosphere will disrupt the various way humans engage in work and social relations.

“Any enhancement of communication ability is a big deal,” he responded. “We humans are dominant because we are so good at communication and coordination, and any improvement would likely boost that. Just consider flash mobs or how online ARG communities do things that seem nearly supernatural.”

Cell phones, he says, made our schedules flexible in time and space, allowing us to coordinate where to meet on the fly. He says we’re also adding various non-human services like apps and Siri-like agents. “Our communications systems are allowing us to interact not just with each other but with various artificial agents,” he says. Messages can be stored, translated and integrated with other messages.

“If we become telepathic, it means we will have ways of doing the same with concepts, ideas and sensory signals,” says Sandberg. “It is hard to predict just what this will be used for since there are so few limitations. But just consider the possibility of getting instruction and skills via augmented reality and well designed sensory/motor interfaces. A team might help a member perform actions while ‘looking over her shoulder’, as if she knew all they knew. And if the system is general enough, it means that you could in principle get help from any skilled person anywhere in the world.”

In response to the same question, Naam noted that communication boosts can accelerate technical innovation, but more importantly, they can also accelerate the spread of any kind of idea. “And that can be hugely disruptive,” he says.

But in terms of the possibilities, Naam says the sky’s the limit.

“With all of those components, you can imagine people doing all sorts of things with such an interface. You could play games together. You could enter virtual worlds together,” he says. “Designers or architects or artists could imagine designs and share them mentally with others. You could work together on any type of project where you can see or hear what you’re doing. And of course, sex has driven a lot of information technologies forward — with sight, sound, touch, and motor control, you could imagine new forms of virtual sex or virtual pornography.”

Warwick imagines communication in the broadest sense, including the technically-enabled telepathic transmission of feelings, thoughts, ideas, and emotions. “I also think this communication will be far richer when compared to the present pathetic way in which humans communicate.” He suspects that visual information may eventually be possible, but that will take some time to develop. He even imagines the sharing of memories. That may be possible, he says, “but maybe not in my lifetime.”

Put all this together, says Warwick, and “the body becomes redundant.” Moreover, when connected in this way “we will be able to understand each other much more.”

A double-edged sword

We also talked about the potential risks.


“There’s the risk of bugs in hardware or software,” says Naam. “There’s the risk of malware or viruses that infect this. There’s the risk of hackers being able to break into the implants in your head. We’ve already seen hackers demonstrate that they can remotely take over pacemakers and insulin pumps. The same risks exist here.”

But the big societal risk, says Naam, stems entirely from the question of who controls this technology.

“That’s the central question I ask in Nexus,” he says. “If we all have brain implants, you can imagine it driving a very bottom-up world — another Renaissance, a world where people are free and creating and sharing more new ideas all the time. Or you can imagine it driving a world like that of 1984, where central authorities are the ones in control, and they’re the ones using these direct brain technologies to monitor people, to keep people in line, or even to manipulate people into being who they’re supposed to be. That’s what keeps me up at night.”

Warwick, on the other hand, told me that the “biggest risk is that some idiot — probably a politician or business person — may stop it from going ahead.” He suspects it will lead to a digital divide between those who have and those who do not, but that it’s a natural progression very much in line with evolution to date.

In response to the question of privacy, Sandberg quipped, “Privacy? What privacy?”

Our lives, he says, will reside in the cloud, and on servers owned by various companies that also sell results from them to other organizations.

“Even if you do not use telepathy-like systems, your behaviour and knowledge can likely be inferred from the rich data everybody else provides,” he says. “And the potential for manipulation, surveillance and propaganda are endless.”

Our cloud exoselves

Without a doubt, the telepathic noosphere will alter the human condition in ways we cannot even begin to imagine. The noosphere will be an extension of our minds. And as David Chalmers and Andy Clark have noted, we should still regard external mental processes as genuine even though they’re technically happening outside our skulls. Consequently, as Sandberg told me, our devices and “cloud exoselves” will truly be extensions of our minds.

“Potentially very enhancing extensions,” he says, “although unlikely to have much volition of their own.”

Sandberg argues that we shouldn’t want our exoselves to be too independent, since they’re likely to make mistakes in our name. “We will always want to have veto power, a bit like how the conscious level of our minds has veto on motor actions being planned,” he says.

Veto power over our cloud exoselves? The future will be a very strange place, indeed.

Top image: agsandrew/Shutterstock, Nicolelis lab.

Monday, March 18, 2013

13:03:2013 -- Building Unbreakable Codes Beyond The Bounds Of Earth



Dear Friends,


Be Well.

David

Building Unbreakable Codes Beyond The Bounds Of Earth

Rights information:
http://bit.ly/QafmwK
 

Quantum satellites may beam down powerful data encryption keys.
Originally published:
Mar 13 2013 - 1:30pm
By:
Joel N. Shurkin, ISNS Contributor

(ISNS) -- Scientists are pushing to create a space-based quantum communications network that could enable impossible-to-monitor transmissions.

In doing so, they might make it possible for someone named Scotty to really teleport some information into space.

It would be enough "to spook" Albert Einstein, said Thomas Jennewein of the University of Waterloo in Ontario, one of the top researchers in the field.

The encryption research could have immediate practical implications. The process would make use of entangled photons, which Einstein--who resisted the consequences of quantum theory until his death--called "spooky action at a distance."

"If we can use correlations between entangled photons to establish a quantum key, it could be used for secure communications," said Jennewein.

Einstein and two colleagues theorized in 1935 that if you had two quantum systems that interacted, such as two atoms in a molecule, and then separated them, they would remain entangled, meaning their properties would be inextricably linked. Measuring one atom would instantly produce a change in the other no matter how far apart they were.

Einstein believed that there was a universal speed limit: nothing could travel faster than light, so he thought such communication—"spooky action"—would be impossible.

But in 1972, a group of U.S. scientists showed that this is exactly what happens, at least over the short distances of their laboratory experiment.

Decades before, another physics giant, Werner Heisenberg, proposed in his famous uncertainty principle that merely observing a particle or otherwise disturbing it changes its properties--and, according to quantum theory, so instantly would those of its entangled twin.

Common encryption involves using keys: series of numbers and letters that encode and decode messages. The sender has one key that encrypts the message; the person receiving the message has another which decodes it.
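A one-time pad is the simplest illustration of those roles: sender and receiver share a random secret key, the sender XORs it with the message, and the receiver XORs again to decode.  (Real systems use vetted ciphers; this sketch only shows what the key does.)

```python
import secrets

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # the shared secret key

# Sender encrypts with the key; receiver decodes with the same key.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
```

The hard part is getting that shared key to both parties without anyone copying it in transit, which is precisely the job the quantum scheme is meant to do.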

Scientists can envision sending beams of quantum signals from one place to another to produce encryption keys, but there is a problem.

Quantum communications signals have not been able to travel very far on Earth. The current record is 89 miles set in the Canary Islands by Jennewein and a team, then of the University of Vienna. The problem is transmission loss or scattering in the atmosphere.

Even using fiber-optic cables is not the answer, according to Joshua Bienfang, at the National Institute of Standards and Technology, another expert in the field. The chances of a single photon traveling safely more than around 250 miles in a fiber-optic cable are slim, he said.
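The slim odds follow from fiber attenuation, which compounds exponentially with distance.  Assuming a typical telecom-fiber loss of about 0.2 dB per kilometer (a standard rule of thumb, not a number from the article):

```python
loss_db_per_km = 0.2           # assumed typical telecom fiber attenuation
distance_km = 250 * 1.609      # roughly 250 miles
total_loss_db = loss_db_per_km * distance_km
survival_prob = 10 ** (-total_loss_db / 10)
print(f"{survival_prob:.1e}")  # on the order of 1e-8: ~1 photon in 100 million
```

A vacuum has no such attenuation, which is why a satellite link starts to look attractive.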

That's why Jennewein and other researchers are looking to space, where the beams would not scatter in the vacuum. His lab, among others, now has produced a design for such satellites that would test that out.

Jennewein describes a system in which a device in a satellite creates entangled photon pairs and simultaneously transmits one of each pair to two ground stations in beams of millions of photons, all in entangled quantum states. That means both stations should have the same key.

The two stations would compare them. If the transmissions were not intercepted or modified by an eavesdropper, the two keys should be identical. The sender can then send a conventionally encrypted message secure in the knowledge no one is listening.

But, if there is any alteration in the keys, which would happen if anyone intercepted the key message, Heisenberg's theory would strike, and the photons would be altered. The two parties would know if there was an eavesdropper and either resend the keys or try another system.
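The key-comparison step can be simulated with ordinary bits.  In this toy model (illustrative only, not a real quantum simulation), the satellite gives both ground stations identical key bits; a naive intercept-resend eavesdropper disturbs each photon, flipping the received bit 25% of the time, so comparing a sample of the two keys reveals the intrusion:

```python
import random

random.seed(42)

def distribute_key(n, eavesdropper=False):
    """Satellite sends both stations one half of each entangled pair."""
    alice = [random.randint(0, 1) for _ in range(n)]
    bob = list(alice)  # undisturbed entanglement: identical keys
    if eavesdropper:
        # Interception disturbs the quantum state: each bit has a 25%
        # chance of arriving flipped (naive intercept-resend attack).
        bob = [b ^ 1 if random.random() < 0.25 else b for b in bob]
    return alice, bob

def error_rate(alice, bob, sample=200):
    """Stations publicly compare a random sample of their keys."""
    idx = random.sample(range(len(alice)), sample)
    return sum(alice[i] != bob[i] for i in idx) / sample

a1, b1 = distribute_key(1000)
a2, b2 = distribute_key(1000, eavesdropper=True)
clean = error_rate(a1, b1)    # no mismatches: channel is safe
tapped = error_rate(a2, b2)   # roughly 25% mismatches: someone listened in
```

A clean channel shows zero mismatches in the sample; an intercepted one shows errors at a rate far above any plausible noise floor, and the stations discard the key and start over.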

Several corporations and government research facilities around the world are working on similar satellite systems.

"Moreover, long-distance 'quantum teleportation' experiments could be conducted--the first baby steps towards realizing the famous Star Trek 'Beam me up, Scotty' command may be only a few years away," Jennewein wrote in the magazine "Physics World." In quantum teleportation, actual objects themselves are not beamed up. Instead, their information—encoded in a quantum state—would vanish from a particle on Earth and then reappear in a particle in space.

The scheme would require three photons, Jennewein said. One, the input photon, to be teleported, and two others, entangled and separated.

"The input photon is correlated with one of the entangled ones, and thereby its quantum state is fully transferred onto the other entangled photon, which can be at a distance," Jennewein said. "The final photon is the new 'original,' and the initial photons completely lose their information."

An additional benefit of developing a quantum satellite system is that it would enable physicists to test quantum theory over much greater distances.

Joel Shurkin is a freelance writer based in Baltimore. He is the author of nine books on science and the history of science, and has taught science journalism at Stanford University, UC Santa Cruz and the University of Alaska Fairbanks.




MUFON

How to Digitally Record/Video a UFO sighting:






Stabilize the camera on a tripod. If there is no tripod, set it on top of a stable, flat surface. If that is not possible, lean against a wall to stabilize your body and keep the camera from shooting shaky, unsteady footage.


Provide visual reference points for comparison. These include the horizon, treetops, lampposts, houses, and geographical landmarks (e.g., Horsetooth Reservoir, Mt. Adams). Include them in the video whenever appropriate, so long as they don't detract from your focus: the UFO.


Narrate your videotape. State the date, time, location, and the direction (N, S, E, W) you are looking. Describe the weather, including approximate temperature, wind speed, any visible cloud cover, and noticeable weather anomalies or events. Describe the UFO's shape, size, color, movements, and approximate altitude, and what it appears to be doing. Also note any unusual physical, psychological, or emotional sensations you experience. Call out the visual reference points on camera so your narration correlates with what the viewer sees and is easier to follow.


Be persistent and consistent. Return to the same location to record again. If you were successful once, UFO sightings may be occurring in that region regularly, perhaps for specific but unknown reasons, and you may be successful again. You may also wish to return at a different time of day (daylight hours) for better orientation and reference, and film a minute or two under “normal” circumstances for comparison.

Write down what you remember immediately afterward. As soon as you finish recording the experience/event, write down your impressions, memories, thoughts, emotions, etc., so they are on the record in writing. If there were other witnesses, have them independently record their own impressions and thoughts. Include any drawings, sketches, or diagrams, and make sure you date and sign your documentation.


Always be prepared. Have a digital camera, or better yet a video camera, with you at all times, charged and ready to go. Make sure you know how to use your camera (and your cell phone's video/photo camera) quickly and properly. These events can occur suddenly, unexpectedly, and often quite randomly, so you will need to be ready.


Look up. Be prepared. Report. Share.


MUFON.COM
