So, apparently the scientists managed to successfully teleport the states of the qubits, but fuck me, I have only a very, very misty idea of what that means and no idea what the implications of this discovery are...
Here's the link:
http://www.engadget.com/2011/04/18/first-light-wave-quantum-teleportation-achieved-opens-door-to-u/#disqus_thread
Can somebody explain in layman's terms what happened and what it means for the future? Did they basically discover the Heisenberg compensator from Star Trek?
@Sinperium
Be more specific. Computers today are 200x more powerful than "a dozen years back"? A dozen means 12 to me. What does it mean to you?
So metamaterials are completely synthetic?
Don't understand how much computing power is needed for modelling DNA, viruses 'n' stuff. But then again, I don't understand exactly how powerful my own computer is (E8400 and GTX 570), nor do I get how complex the modelling of such stuff is. If there was a way to compare, it would be interesting. I'd like to see how much more powerful my computer is compared to my first one (AMD K6-2, 333 MHz).
How can cost be such a hindrance to medical science? I thought those research centers had good (top 500) supercomputers that work at incredible speeds (compared to my own or your computer), and they buy new ones every year (they get money every year, I presume).
Half a dozen--about six or so years ago. A lot of real breakthroughs happened in the late 90's and around the time the Sony PSP came out there were some things developed that really "oomphed" computers. In the past five years or so we've had several other things come along with the same or more potential.
Metamaterials are natural materials put together in combinations that don't occur in nature. Say you want titanium with carbon fiber molecularly bonded to it--that would be a metamaterial. Up until a few years ago there were very few metamaterials we could make, and almost none of them were practical for any use or production.
In the past few years, using primarily lasers but also some chemical processes, we've been able to make materials that we theorized might be possible but previously had no way of actually making.
Here's some examples:
http://www.physorg.com/news/2012-01-breakthrough-superlens-cheap-simple-lens.html
http://physicsworld.com/cws/article/news/32464
http://physicsworld.com/cws/article/news/42043
http://www.physorg.com/news80488753.html
It's the increase in computational power that has let us start breaking through from theory to experiment in these things. The medical field has had a similar revolution with deciphering chemical and electrical processes in the body and mapping out and modeling genes, proteins and molecular compounds that can be created synthetically but don't exist naturally.
All of these things are due to cheaper, more powerful computers. A network of high-end desktop PCs today can outperform an old Cray supercomputer computationally.
My uncle years back was assistant editor at the state capital paper in Baton Rouge, Louisiana. They had the basement of a downtown high-rise filled with state-of-the-art mainframes. They took up about as much space as a swimming pool, cost $15 million and were connected to "smart" terminals (basically what we call desktops now).
Playing the text-based game Zork on one terminal would bog down the entire network--I know because I had an angry editor throw me off more times than I can count when my uncle would sneak me in at night.
So a university can have a supercomputer or a parallel PC network, but it costs a lot of cash, and an entire university or research facility will have every department standing in line wanting to run detailed computations that take months or even years to finish.
A quantum computer might do every department's computations in 30 seconds, and could handle scales of queries beyond what a computer today is even capable of processing with existing software and hardware.
I'd have to do some research to show real-time scales on how much more powerful processors have become over time. Sounds like a job for a detail-centric Sith lord.
Here's a small teaser and this (no pictures here sorry).
The teaser was fine, if a bit small to read. However, the second one is outdated by ten years. If computational power increases by a factor of two every eighteen months, then within the last ten years that power should be somewhere in the neighborhood of... let's see: in 2002 XP was first coming out, Win 98 SE was pretty much the OS of choice, and processor speeds were somewhere around 500 MHz. Fast-forward a few years and Vista is coming out, processor speeds are up to a gig or better. 2012 sees Win 7, quad-core processors, speeds in excess of 4 GHz... from 500 MHz to 4+ GHz is roughly an eightfold jump in raw clock speed--not quite the orders of magnitude it feels like (napkin math below). And that's off the shelf.

What's really available, but not to us feeble folk, are machines capable of generating giant 3D images on buildings using lasers in a sort of holographic effect. Supercomputers that can model a Type Ia supernova event quite accurately. A stylus that can move atoms one by one across a surface. That's quite a jump in only ten years, and still not even scratching the surface.
Just don't quote me on this. A lot of it is pure guesswork and what I've read in various articles.
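To sanity-check that napkin math, here's a quick sketch (the clock speeds are the ones quoted above; the 18-month doubling is the classic Moore's-law rule of thumb):

```python
# Back-of-the-envelope Moore's law check: doubling every 18 months.
import math

years = 10.0
doubling_period = 1.5            # years per doubling ("18 months")
predicted_factor = 2 ** (years / doubling_period)

observed_factor = 4000 / 500     # MHz: the 2002 vs 2012 clock speeds quoted above

print(f"Predicted growth over {years:.0f} years: ~{predicted_factor:.0f}x")
print(f"Observed clock-speed growth: ~{observed_factor:.0f}x "
      f"({math.log10(observed_factor):.1f} orders of magnitude)")
# ~101x predicted vs ~8x in raw clock speed -- the rest of the gain came
# from architecture (cores, caches, GPUs), not MHz.
```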
"Teaser", not "research" (I gotta real life too ya know). Sorry for the teeny graph and dated stats--they are just good general info to get someone started and were grabbed by a quick search. There are plenty of facts out there to read on--you can take years reading all the detail you want.
It was changes in computer architecture--the actual design and function--that brought these big increases, and the same thing is happening now with multi-core GPUs working in tandem with multi-core CPUs, new smaller and more electrically efficient processors, etc., etc.
The US is currently building the world's fastest computer and it dwarfs anything that came before it in capability. Even that will pale in comparison though to working networks of functional quantum computers--and by "pale"--I mean, "seem like a joke".
Right now we are at the point of almost being able to map the complete range of human emotion with facial expression alone--regardless of race, nationality or gender. We can "read" words people are thinking by interpreting brain waves--etc., etc.
If you want a crude analogy of what a quantum computer will be like, go here and play 20 Questions... if you never have, you'll be astounded. 20 Questions is a simple program that can run on a small logic board and fit inside a plastic toy that sells for around $10. Imagine what a quantum supercomputer will be able to do.
For example, I thought, "statue" and 20 Questions got it in 19 guesses--a working quantum computer could easily make trillions of guesses in a nanosecond.
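The reason 20 Questions works at all is pure information theory: each yes/no answer can halve what's left, so 20 answers are enough to pin down one of 2^20 (about a million) possibilities. A minimal guess-the-number version of the same idea (my own toy, not how the actual toy works inside):

```python
def twenty_questions(secret, lo=0, hi=2**20 - 1):
    """Find secret in [lo, hi] asking only 'is it bigger than X?' questions."""
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if secret > mid:   # "yes, it's bigger"
            lo = mid + 1
        else:              # "no"
            hi = mid
    return lo, questions

guess, asked = twenty_questions(123_456)
print(f"Found {guess} in {asked} questions")  # always 20 for a 2**20 range
```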
Holy SHIT!!
submit or be destroyed
That "teaser" should have been drawn differently to illustrate the shear difference over time....
And I think it is even sharper a curve than that. Its simply incredible.
The teaser certainly looked linear.
(I think it's because the Y axis was increasing exponentially--i.e., a log scale, which plots exponential growth as a straight line)
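Here's a quick way to see that effect, if you have matplotlib handy: the same 18-month-doubling curve is a hockey stick on a linear Y axis and a dead-straight line on a log Y axis.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 40, 200)    # years
power = 2 ** (t / 1.5)         # doubling every 18 months

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(t, power); ax1.set_title("linear Y: hockey stick")
ax2.plot(t, power); ax2.set_yscale("log"); ax2.set_title("log Y: straight line")
plt.show()
```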
I'm like working and stuff... I'll have to look at that later. For the moment, I just had to share this:
"A few years ago the wife and I were watching a "Nova" episode about the race between the US and USSR to capture/recruit as many German scientists as possible after WWII. At one point, the narrator said that the allies had finally tracked down Hiesenberg's hiding place and new exactly where he was. At that point I said to my wife that since they were so certian they knew where he was, they had absolutely no idea how fast he was going. She didn't get it."
Hehe.
Yeah, Siv...it's incredible especially if you just look at the past three years and what's coming as a result of it. We may have broken Moore's law completely.
Remember, Skynet is your friend and if HAL doesn't open your airlock, please don't argue about it--it's for the best.
The next steps are:
The whole idea of using entanglement for processing is that you don't have to try something 50 million times (what's known as the brute-force method) until you get the right answer. Basically, you insert a key into a door. This key has both no teeth and all possible teeth at the same time. It is a true skeleton key. The door lock checks to see if the correct key is inserted, and it is--but so are a bunch of incorrect keys. The magic part is that the act of checking can be used to destroy those other keys so that only the correct one is left. It's a completely different way of getting the answer.
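(What's being described here is essentially Grover's search algorithm. For the curious, here's a toy numpy simulation of the amplitude bookkeeping--a classical sketch of the math, obviously, not real quantum hardware:)

```python
import numpy as np

N = 64                    # size of the keyring; brute force averages N/2 tries
target = 42               # the one key that opens the door

state = np.full(N, 1 / np.sqrt(N))        # the "all teeth at once" key

iterations = int(np.pi / 4 * np.sqrt(N))  # only ~sqrt(N) checks needed
for _ in range(iterations):
    state[target] *= -1                   # oracle: "checking" marks the right key
    state = 2 * state.mean() - state      # diffusion: "destroys" the wrong keys

print(f"After {iterations} checks, P(right key) = {state[target] ** 2:.3f}")
# ~0.997 after 6 checks, vs ~32 tries on average by brute force.
```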
To compare, your AMD K6 had 8.8 million transistors. Your E8400 (coincidentally the same as mine) has 410 million transistors.
As for 'can it model DNA?', well the answer is yes. But you can do that with a ping-pong ball as well. It's all about the level of detail. The more computing power you have, the more details you can include in a simulation. Such details are only needed in particular situations, but help drive the overall understanding of the phenomena.
[Edit] I just read that the new Ivy Bridge chips (my planned upgrade) will have 1.4 billion transistors.
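For fun, those three transistor counts track the 18-month doubling rule surprisingly well (the release years below are my own rough figures, not from the posts above):

```python
import math

# (year, transistor count) -- years are my assumption
chips = {"AMD K6": (1998, 8.8e6),
         "Core 2 Duo E8400": (2008, 410e6),
         "Ivy Bridge": (2012, 1.4e9)}

y0, t0 = chips["AMD K6"]
for name, (year, count) in chips.items():
    if name == "AMD K6":
        continue
    doublings = math.log2(count / t0)
    print(f"{name}: {count / t0:.0f}x the K6, "
          f"one doubling every {(year - y0) / doublings:.1f} years")
# E8400: ~47x, a doubling every ~1.8 years; Ivy Bridge: ~159x, every ~1.9 years.
```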
Admittedly I am no expert, but I do believe that spintronics is mostly a group phenomenon, in that it makes use of many electrons of a particular spin to affect a current flow or magnetic field... maybe I just misunderstood what you said, but your statement seems to imply using an individual electron to store information...
That seems highly improbable given that individual electrons are indistinguishable and their states decay rather quickly...furthermore, you can't "measure" the past spin of an individual electron as measuring it will collapse it into either a spin up or spin down state (you would need a large array of similar electrons, an expectation value, and a confidence interval)...so basically, every time you would "read" the electron, you'd erase whatever it was storing...
Seleuceia,
Yes, it would erase it when read, if done exactly in that fashion. I was talking about how it works in more of a theoretical sense. To engineer something in a practical manner, you often have to do things differently from the simpler theoretical model. So, I admit to some limitations in my description. However, I do think you have mistaken some parts of spintronics, so let me try to go into more detail...
So in an ordinary CPU, in a theoretical sense, a single electron travels through a wire and enters a transistor. If the transistor is on, it passes through and keeps on going. If it's off, the electron goes to ground. Now, is this how CPUs are built? No. Voltage is sent through the wire, which is the result of a bunch of electrons flowing in a unified direction (it's a DC circuit). It takes a certain amount of voltage persisting for a certain amount of time to pass through the transistor and turn it on. The advantage of this is that stray electrons do not interfere and you don't have to deal with quantum effects. In this sense, the electrons are part of a sea, and it takes a certain wave height and length to pass the information along. Future development does aim to 'lower the sea level' to make circuits smaller and more efficient, and graphene, being a better conductor, should help on this front.
So for spintronic devices, theory has it working like I explained above. Practically... well, spin has to do with magnetism. In a sense, a magnetic field is produced by electrons transferring spin (it's a very confusing property; I honestly wish I knew more about it). So, in a practical device, a voltage of electrons would enter the spin transistor. Normally the spins are balanced, so you can use some magnetic materials to determine whether there is a 'net spin' being carried or 'no net spin'. How exactly this is accomplished, I do not know. But the concept is similar to voltages: you send a pulse of electrons with net spin to register a '1', and a pulse with no net spin is a '0'. You don't need to know exactly which way the electrons are 'spun', just that there is a net value. The important part is that you want to reduce the 'size' of each 'pulse' so that you can have more 'pulses' going through the system.
I'll look more into this to see what I can find out, as I don't know much about how this works.
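Just to make the 'sea of electrons' point above concrete, here's a toy threshold model (all numbers made up):

```python
import random

def read_bit(pulse, threshold, stray=3):
    """A 'transistor' fires when signal + random stray charge clears threshold."""
    return 1 if pulse + random.randint(0, stray) > threshold else 0

# Big pulses and a high threshold: stray electrons can't flip the answer.
print([read_bit(pulse=100, threshold=50) for _ in range(5)])  # always 1
print([read_bit(pulse=0,   threshold=50) for _ in range(5)])  # always 0

# Single-electron scale: stray charge forges 1s out of nothing.
print([read_bit(pulse=0, threshold=0) for _ in range(5)])     # coin flips
```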
Hi Sith.
Quantum computers are going to happen--absolutely. There are already very simple working proof-of-concept machines, and it's actually the processing-power increases, and the resultant metamaterials and the like, that have allowed the breakthroughs that look to provide the means to build the hardware and controlling circuits.
There are already experimental working proofs for short- and long-term memory storage and retrieval, board circuits and substrates, on-off switches, one- and two-way controllable photonic circuits... and that's just to name a few off the top of my head.
It's just getting it all together, setting up manufacturing and then making production models that's needed. After that, we can start to improve on them and find out what they can really do.
They have already been able to alter the spin of electrons at will in research. Actually, firmware and software are the real unknowns as much as anything right now.
From Wikipedia:
Charge carriers (such as electrons) have a property known as spin which is a small quantity of angular momentum intrinsic to the carrier. An electrical current is generally unpolarized (consisting of 50% spin-up and 50% spin-down electrons); a spin polarized current is one with more electrons of either spin. By passing a current through a thick magnetic layer, one can produce a spin-polarized current. If a spin-polarized current is directed into a magnetic layer, angular momentum can be transferred to the layer, changing its orientation. This can be used to excite oscillations or even flip the orientation of the magnet. The effects are usually only seen in nanometer scale devices.
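That definition boils down to a single number, the polarization P = (n_up - n_down) / (n_up + n_down). Quick sketch:

```python
def polarization(n_up, n_down):
    """Fraction of net spin carried by a current of n_up + n_down electrons."""
    return (n_up - n_down) / (n_up + n_down)

print(polarization(50, 50))  # 0.0 -- the ordinary, unpolarized current
print(polarization(80, 20))  # 0.6 -- spin-polarized, carries a "net spin"
```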
Sinperium, decoherence is still a big problem. Sure, there have been experiments where 'components' of a future Qomputer (TM) have run without this issue, but as far as I am aware, these components are not running continuously and/or outputting to the outside world (which runs the risk of decoherence). Furthermore, these different components are not of designs that can just work together; they are different technologies. Still, the fundamental principles have been proven. And, as I said, I think we will have Qomputers someday. However, it's still a ways off from being proven possible, let alone practical.
Right now, even if we could combine all the components, a Qomputer would not be any better than a regular computer, because we can't get enough qubits to compete. And, as you increase the number of qubits, you increase the risk of decoherence. So, it stands to reason that it might be impossible to make a machine with enough qubits that it can actually do the amazing things theoretically possible without decoherence spoiling the party. Also, no one knows how to write software for such a machine. There's some logic worked out, but how do you program it, exactly?
Once again though, I do think they are possible.
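To put a toy number on why scaling qubits is so hard (assuming each qubit independently survives the computation with probability p, which is exactly the problem error-correction schemes exist to fight):

```python
p = 0.99  # per-qubit survival probability -- an illustrative figure, not a spec
for n in (2, 10, 50, 300):
    print(f"{n:4d} qubits: P(no decoherence anywhere) = {p ** n:.3f}")
# 2: 0.980   10: 0.904   50: 0.605   300: 0.049 -- the party spoils fast.
```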
If it was easy we'd all be building barebones kits right now.
They'll get there.
It's well within the realm of possible now--practical is what they are working towards.
What bothered me about this statement is that you referred to "each electron", implying that each individual electron carries with it information, which is simply not the case...
Again, sending a single electron one at a time is very VERY different from sending a DC current, and the two should not be thought of as analogous... the physical and mathematical treatment of individual electrons is much different from the treatment of a large flow of electrons... to say that one electron is the theoretical model while a group of electrons is the practical model is simply a fallacy... a current of many electrons is both the theoretical and practical model...
I would just stick with this part of your statement about transistors depending on "high" or "low" signals...transistors and spintronics (as you have stated in your most recent post) both depend on a current of electrons, not individual electrons...
Clearly you understand the basic premise of spintronics, and if it seems like I'm being an ass I'm sorry...it's just that since we are discussing quantum computing, distinguishing between an individual particle and a flux of particles is very important...using the states of individual electrons (or particles) to store or transfer information is not spintronics and I just wanted some clarification from your "first" post...
Anyway...whether I have a transistor measuring "high"/"low" signals or a magnetic layer measuring net spin/zero spin, I still have a binary system...I'm not particularly familiar with the design of all electronics, but it would seem that in either case, I'm still transmitting the same amount of information per pulse...so:
I'm assuming this means then that they are moving towards devices with a transistor and magnetic reader "in series", so to speak...I'm guessing that the transistor first detects "high" signals and then the magnetic reader obtains further information by determining the existence of a net spin, yes?
IIRC, newer hard drive readers measure net spin instead of a high/low signal (as opposed to measuring both at the same time), so the information per pulse is still the same... perhaps the reason why they don't use both methods at once in circuitry has to do with the materials being used... maybe silicon just doesn't mesh well with spintronics?
It's not exactly new technology anymore so if "doubling" the information per pulse were viable I would think that it would have already been implemented...I'm guessing there is a practical limitation (probably material related) preventing such implementation...
Well, the trend is to reduce the voltage traveling through each wire to the transistor. The goal is to get it down to the point where we are sending only one electron. The possibility of bumping up against quantum effects has long been the feared end of Moore's law. So, you're right, that's not the way it is right now. But it is the way we're headed, before we even reach a quantum computer.
From what I've read, you could potentially read the spin at the same time as a voltage measurement. Yes, it is still a binary system, but now it only takes 4 transistors to operate on a byte. At first, it will probably be quite separate and not a true 'doubling of power': you'll have transistors that measure spin and those that measure voltage, and it'll basically function as an additional core, but eventually you can combine the two to occupy the same space.
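The '4 transistors per byte' arithmetic, as a purely illustrative encoding (not how any real spintronic part works): each pulse carries a voltage bit plus a net-spin bit, so a byte fits in 4 pulses instead of 8.

```python
def encode(byte):
    """Pack 8 bits into 4 (voltage_bit, spin_bit) pulses."""
    bits = [(byte >> i) & 1 for i in range(8)]
    return list(zip(bits[0::2], bits[1::2]))

def decode(pulses):
    byte = 0
    for i, (v, s) in enumerate(pulses):
        byte |= v << (2 * i)        # voltage carries the even bits
        byte |= s << (2 * i + 1)    # net spin carries the odd bits
    return byte

pulses = encode(0b10110010)
print(len(pulses), bin(decode(pulses)))  # 4 0b10110010
```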
Your intuition is correct: spintronics has some trouble working correctly in silicon at the moment. I think graphene has been the material of choice, but then that has issues with being too conductive.
As for the new/old technology aspect: it is kind of like what has been used on HDDs. The thing is that HDDs still store data as magnetic domains in the material, which are flipped via a magnetic field--but that field is generated by a spintronic read/write head. This was necessary because it became difficult to generate a focused magnetic field in such a small area. The newer form of spintronics will mostly eliminate the magnetic aspect and just deal with raw spin.
As an aside, I read recently that a new way to read/write to HDDs was discovered, and it would probably eliminate this part of spintronics: use lasers. Apparently, a laser pulse can flip a magnetic domain quickly enough that it would blow SSD access/write times out of the water. The only thing that has to be done is to incorporate a laser as a read/write head, so this tech could also potentially come to the field quickly.
A final awesome piece of possible future computer tech is thermoelectrics: materials that generate electricity from heat. There are a few different ways these could operate, but the common idea is that CPUs today generate a lot of heat that wears down the CPU over time and has to be controlled within the case. Well, thermoelectrics can recover that energy and put it back into the system. I've also seen research showing heat can be directly converted into spin (this comes close to breaking entropy, as no heat is added to the system). Whether this tech will just save electricity or be directly involved in some of the computations remains to be seen.
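For scale, the standard thermoelectric figure of merit ZT sets how much of a heat flow you can claw back as electricity. A rough sketch (the temperatures and ZT values below are illustrative assumptions, not measured specs):

```python
import math

def max_efficiency(T_hot, T_cold, ZT):
    """Peak conversion efficiency of a thermoelectric generator."""
    carnot = (T_hot - T_cold) / T_hot
    m = math.sqrt(1 + ZT)
    return carnot * (m - 1) / (m + T_cold / T_hot)

T_hot, T_cold = 350.0, 300.0  # K: warm CPU heatsink vs case air (assumed)
for ZT in (1.0, 2.0):         # ZT ~ 1 is typical of good materials today
    eff = max_efficiency(T_hot, T_cold, ZT)
    print(f"ZT={ZT}: recover ~{eff:.1%} of the heat flow as electricity")
# ~2.6% and ~4.0% -- modest, but it's power that was being thrown away anyway.
```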
You know, Sinperium... I don't see how you can stay optimistic about the scientific future. Obviously doubt is a factor of utmost importance in science, so there will always be some pessimism, but I'm talking about above and beyond that.
I just ran into a friend I haven't seen in a long time. While I wouldn't say she's a scientific genius by a long-shot-behind-the-back-left-handed, I think she's at least average-ish.
She tells me, "I heard that the Earth is closer to the Sun than it has been in blahblah* years and that because of it, everyone weighs 10 pounds heavier right now." (* = some number I have failed to remember; I kind of forgot to pay attention to that part because I was so distracted by the rest of the sentence). "No," I say. "While it is true that during the winter we are closer to the Sun than the rest of the year, that happens every year. Furthermore, the effect of the Sun is so small that it wouldn't be anywhere near that big. Now, I can't say whether we're any closer this year compared to every other year, but that doesn't mean-" "I saw a video on the internet of someone standing a broom straight up." "..." "Right now, you can take a broom and stand it on end because of this. And you weigh 10 pounds heavier."
I... didn't know where to start with all that. And this was a passing comment as I was leaving. Like, WTF is wrong with people?! YOU SHOULD KNOW THIS STUFF IS FALSE IN ORDER TO BE ALLOWED INTO 5th GRADE.
Just when I think this thread has passed it returns.
I'll give you two quotes from the late Isaac Asimov--whose works I grew up reading and who I agree with on many of his points (though not all).
The first is:
Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today - but the core of science fiction, its essence has become crucial to our salvation if we are to be saved at all.
It's really important to imagine and have faith in a real future--and that's particularly what most science fiction is about (especially in Asimov's day). Methodical science does not provide hope, has no morality and is no different than animal instinct. It can be performed soullessly and with no regard for humanity.
What gives life and value to science is the inspiration of the theorists and researchers who are guided by vision, and that vision stems from their human experience--their shared humanity with all of us. So science is more than "numbers and methods" and is as altruistic as religion and faith at its core. As a person of faith, I can fully relate to this and have no problem walking hand in hand with science and my own experience and beliefs--they both celebrate and inquire into the same mysteries and wonders of life and are driven by a desire to see and know what has only previously been imagined.
So as long as scientists are more than "biological computers" and are humans who encompass the spirit of human desire and hope--then there is always hope. In a time when people are less versed in science and see less value in it, it's all the more important that those special individuals with greater vision, greater faith and greater imagination keep attempting to kindle that spark in others and shed light for those who have none. As long as science is guided by scientists who value people more than their art, we are in good hands.
I think this is what Asimov was getting at. He himself held scientific credentials, yet wrote profusely and said it gave his life meaning. It's interesting that a man of high intellect, with few (if any) spiritual beliefs, was driven to look beyond science at the potentials of its impact on humanity. I admire that more than the science he practiced.
People need the actions and words and deeds of individuals to impart to them why there is hope and things to stand in wonder of.
Every researcher now looking beyond "theory" and "procedure" is as important as the armies of researchers, engineers and mathematicians trying to prove a theory--they go hand in hand.
Asimov said that the core of science fiction is crucial to mankind's salvation. That's why I've prodded you so much on your focus on "practicalities over all else"...science is a two-sided coin.
Asimov also said:
It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.
It's really important that we all come to understand this is our salvation--in the sense of mankind having any hope of a good future and as long as we are bound here in this life, in this universe and in our limited dimension. People who already know this have a higher obligation to maintain hope for others--to have faith for them when they aren't capable of it themselves. That is what faith is for. Without it, there is no hope and the world will spiral down at every point when challenges exceed our capabilities.
Ultimately, we must have faith in ourselves, in our stories and in a future. If one can't do this, and practices "mere science", then that individual is simply gratifying their own intellect or serving as a drone in the greater scientific collective--blind and with no concept of the real significance of their work--a work that can change the future of mankind.
So yeah, I'm optimistic. I have faith in people driven by a higher vision, and when they become scientists it can't help but be encouraging. I do have concerns about the future, as a person of faith who is very cognizant that human limitations may be our downfall at any moment. But as long as there are individuals who don't surrender, there's always hope.
Had to add this one last quote by Asimov:
The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' but 'That's funny...'
Now I am going to go get in an elevator and go down so I can jump up at the end and be weightless. I hear the planets are aligned and their gravity will catch me.
I appreciate what you are saying, and the motivational attempt as well.
My viewpoint though has always been that Science should be done objectively. You say "Let's see what happens", and not "How can we make this?". That's an engineer's job. The engineer uses the work of scientists to guide him. If he doesn't he's really an Alchemist; using superstitions to make things happen.
Obviously, engineers do have to do some experiments. For example, a new bridge is load tested to make sure it can hold more weight than what it is rated to hold; this is done because the specific bridge design with the specific materials in the specific environment was never tested before, even though each individual aspect might/should have been tested separately. To be clear, if some engineer decided to build a bridge of record length out of Jello... that's Alchemy. The proper way to approach something like that is to have the structural strength of Jello tested and test the building design as a function of material strength and resilience.
The theorist's job is to take the new experiments and see if they can predict the outcomes of future experiments. In this way, the engineers can focus in on obtaining specific results from things.
Anyways, I think my original point was that people are idiots and the disease is spreading. 'Hope' doesn't make room-temperature superconductors possible, but many people working on the problem might. In order for civilization to progress, we need more people to understand the basic ideas and results of science and fewer people who object (not because objections are bad, but because they just refuse to see reason).
You have to have people inspired to make them and people who inspire others to use them or want them made. It works together.
People who surrender the struggle win no one.
I disagree. I like physics because it's interesting. I like the way subtle little details combine to create something counterintuitive. It's fun to play with these ideas and see what comes out. I think that's why most physicists are physicists. Not because they're inspired to make the equation that will allow someone to make a teleporter or to build a device to test whether a particular kind of dark matter is out there.
What you're thinking of is more of an inventor. That requires vision and inspiration.
Sinperium, since you're a Sci-fi guy, have you ever seen the movie Primer? Good film. The guys there were not trying to build a specific thing so much. I mean, I'm sure they had a goal to their design, but they were just building for the fun of it. Along the way, they discovered something (I'm not going to go into it if you haven't seen it. Half the fun of seeing it is figuring out what the heck is going on) and it led to new things to 'play' with, new discoveries, and they learned they could do more.
Obviously, that was a movie, so there's a lot there that I would not advocate adopting for the scientific community.
Yep--saw that movie--and you're right with your distinction between inventing and curiosity. I think they go hand in hand. Inventing does involve very much the "seeing how something works" part too.
Some inventors are crass and only look at ways to achieve a financial goal, with no consideration for elegance or true invention, and I think some "number crunchers" do the same thing. It's like sudoku to them: they just put pieces together and then grab a new puzzle, with no vision of anything bigger driving them. The real inventors (both engineering and mathematical) are, to me, the ones "looking for the prize".
Distributed human computing works well many times by throwing a lot of people with different goals and approaches at the same problem--even when they might not realize it.
I also know you're going to throw a can at me since I don't have the specifics, but I just saw an article this past month referencing some very distant galactic cluster with a mathematically extremely lopsided distribution of "what we presently call dark matter".
What was interesting to me about it was that the observing scientists speculated it may be the first observation of a point in space where we're seeing a large-scale effect resulting directly from forces within our own universe not fully obeying the universal laws of physics.
That would be really cool.
P.S. I loved Sagan's device in Contact that when constructed seemed to observably do nothing--also very cool.
"If it takes a village to raise an idiot then perhaps we should destroy that village."
And the idiot who believes that's the solution has just multiplied the problem. Now we're getting idiots^2.
As for the dark matter topic, they've been able to map dark matter distributions for a while. Just a bit of a dark matter... primer: the reason dark matter is thought to exist is that if you look at all the stuff in a galaxy and observe the way it moves/spins, the math doesn't jibe. There has to be a lot more mass there that we just can't see.
Okay, so now that is out of the way. The thing is that just because we don't know what it is doesn't mean it's not obeying the 'universal' laws of physics (poking fun at you a bit--what's with the 'universal'?). The reason we know it is there is gravity, and gravity bends space; therefore, concentrations of dark matter will bend space. So, by measuring the bent light from these galaxies, you can geometrically work out where the dark matter is and roughly how much is there (sketch below).
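The geometry is the standard lensing one: a lumped mass M bends background light into a ring of angular radius theta_E (the Einstein radius), so the distortions tell you where the mass sits. A rough sketch (the mass and distances are illustrative, and real work uses proper cosmological distances rather than this flat-space shortcut):

```python
import math

G, c = 6.674e-11, 2.998e8        # SI units
Mpc, M_sun = 3.086e22, 1.989e30  # meters per megaparsec, kg per solar mass

M = 1e15 * M_sun                 # a big galaxy cluster (assumed)
D_lens, D_src = 1000 * Mpc, 2000 * Mpc
D_ls = D_src - D_lens            # flat-space shortcut

theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_lens * D_src))
print(f"Einstein radius ~ {math.degrees(theta_E) * 3600:.0f} arcsec")
# ~64 arcsec: map where the ring-like distortions sit across the sky and
# you've mapped where the (dark) mass is.
```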
Things learned from this technique:
Back to the quantum computing topic... http://www.sciencedaily.com/releases/2012/04/120404161943.htm
I'm glad they got that far, though it is only 2 qubits. Their design is also not repeatable (i.e., you can't build five of the same device) because they used flaws already present in the diamond.