AI and God

AGI, or Artificial General Intelligence, is the holy grail of much AI research. It is an AI that can learn anything, at least anything a human can learn. If we could achieve it, humans would never need to work again, or at least the nature of our work would shift far more dramatically than it ever has in human history.

Some people, particularly AI “doomers” (people who think achieving AGI strongly threatens an apocalypse), seem to believe that if we achieved AGI, it would possess magical abilities to break all encryption or determine objective truths.

My use of the word “magical” reveals what I think about this notion: it is utterly foolish, preposterous, ridiculous, and just plain stupid!

Consider, for instance, the halting problem. Can we write a computer program that takes in another program (along with its input) and tells us whether that program will eventually halt or run forever? Alan Turing proved this to be mathematically impossible. No such program can be written. AGI won’t be able to do it either.
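To see why no AGI could sidestep this, here is Turing’s standard diagonalization argument as a small Python sketch (the halts function is purely hypothetical, which is the whole point):

```python
# A sketch of Turing's argument. Suppose, for contradiction, that a perfect
# halting checker existed:
def halts(program, argument) -> bool:
    """Hypothetically returns True iff program(argument) eventually halts."""
    ...

# Then we could write a program that does the opposite of whatever halts()
# predicts it will do when run on itself:
def contrarian(program):
    if halts(program, program):
        while True:        # predicted to halt, so loop forever instead
            pass
    else:
        return             # predicted to loop forever, so halt immediately

# contrarian(contrarian) halts if and only if it does not halt -- a
# contradiction, so the assumed halts() cannot exist. No amount of
# intelligence, artificial or otherwise, changes that.
```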

Similarly with encryption: AGI will not magically discover new number-theoretic results that suddenly allow all encryption to be broken in a practical amount of time.

AGI will not be able to break mathematical limits that we are already certain of. Why do some people seem to imagine that it will be able to do impossible things like this?

Perhaps the silliest notion of all is that AGI will somehow be able to spit out objective truths, somehow avoiding the ambiguities that lead human intelligences to conflicting conclusions. Where the heck would such objective conclusions come from? Will it be privy to some magical data that humans cannot perceive? How would it get such data? Will it recognize secret codes in the data we train it with?

Even with human intelligence, we can draw multiple conflicting conclusions from the same data. See my previous post about the meaning of facts (i.e. data). When we come to conflicting conclusions, what do we do? We experiment! If we can, at least. (Otherwise we just argue about it, I guess.) And the point of such experimenting is not to find objective truth, since we can’t, but rather to be able to make useful predictions. Doing this leads to that, so if you want that, do this. And then we build on it. Hmmm, so if this leads to that, does this other related thing lead to that other related thing? Experiment, find out. (On a side note, AGI is, in my opinion, all about figuring out how human intelligence is capable of making that leap from one set of relations to another, or, to put it another way, how we are able to generalize predictive relationships. It comes naturally to us (to some more than others), but we have no idea how to program a computer to do it.1)

So Dilbert creator Scott Adams asks some silly questions on Twitter regarding AI and God:

I shall now try to answer these questions:

1. No, because that’s not what God is.

2. Is that a question? Anyway, here Adams seems to be supposing that AI, or AGI, is synonymous with conscious experience itself, which is quite a leap! Even if we believed it, why should that mean anything to a human, whose intelligence is not, by definition, artificial? Finally, I’m not sure what Adams’s understanding of free will is. Free will is the experience of making a conscious choice. It is not (necessarily) the universe’s ability to do something magically undeterministic in a human brain. (For instance, see compatibilism.)

3. Yes; where does Adams think belief in souls comes from? For that matter, how would a human know if a robot is “way smarter”? We’d need some way to relate to it, to find meaning in its output.2 But it’s still a non sequitur to suppose that such a robot would somehow conclude something about the existence of souls from whatever data it happens to be given, and that such a conclusion would then be objective. One might as well doubt the existence of souls because some “way smarter” atheist says so.

4. How religions are “created”, in the general sense, has nothing to do with faith in them. That’s like doubting the usefulness of a scientific invention by learning how it was invented. Also, is an AI “that never lies” supposed to be the same as an AI that is never wrong? Because that cannot exist, as explained above.

5. How would AI come to such a conclusion? From training data? Or it opens up a spiritual portal to the God dimension?

All these questions seem to be based on a belief that some powerful AI would gain some kind of spiritual perception from data alone.

To be fair, these questions do point to the philosophical conundrums inherent in a materialistic / deterministic understanding of the human brain and its ability to perceive and believe in God. We don’t know how the brain does it. One could say, “Oh, one just gets it from his parents!3” but that is hardly a satisfactory explanation. Firstly, it implies either an infinite regress, which explains nothing, or that some human was the first to create the idea, which just leads back to the initial question of how a human brain could do so. Secondly, even if the idea is learned, the human brain must have some prior ability to perceive its meaning; where does that ability come from? How did it form? I ask such questions not to imply that a supernatural cause is required (that’s a separate issue / argument); I’m only pointing out that it’s something we don’t yet understand from a scientific point of view. (And understanding it would not shake one’s faith, any more than understanding that “two plus two is four” is manifested as neural signals in your brain makes two plus two not actually four. That is, if you understand something to be true, that understanding will obviously be reflected in some physical manifestation in your brain.)

Questions of objective truth aside, we could then ask: could a sufficiently advanced AI believe in and perceive God as humans do? It’s certainly an interesting question, but it implies nothing about human belief in and of itself, because, again, it would give us no greater pathway to objective truth.

Finally, to answer Sam Altman (in the tweet Scott Adams was quoting): It’s a tool. You did not create a creature. Don’t flatter yourself!

So those were just some random ramblings on AI and God. I hope you enjoyed. It was all actually written by AI!

Just kidding. But what if it was?!

(Artwork is by AI though, obviously. DALL-E 3.)

Jurassic Park as plotted by AI

Lately I’ve been fooling around with play.aidungeon.com, particularly its “Dragon” model, which is perhaps based on GPT-3 (though I’m not sure). While the app is biased towards generating second-person adventure game text, I have found it fun to feed it some plot summaries and let it generate a continuation. The results are nonsense, illogical, and inconsistent, but funny.

With regard to story writing, the app can be a lot of fun for generating random ideas, but it’s just about useless (so far as I can tell) for generating appropriately constrained ideas, which are far more important to story writing. Stories, after all, have to go somewhere. Plots develop, characters develop, tensions rise and fall, etc. With only random ideas, the story just kind of meanders around aimlessly. Perhaps some of that pointless meandering can be tamed with proper prompting, but I have not yet found an effective strategy for it. Future apps may be better designed for such a purpose, or it may just be an inherent limitation of the AI model itself.

Below are the results of one of my experiments with having the app continue the plot of Jurassic Park. The first three paragraphs are copied from Wikipedia’s plot summary of the film, and the rest is from the AI. You can see that it makes very little sense, and would make a pretty terrible movie, but I found it humorous.

Another experiment involved having it continue the plot of the film Clue, in which Mr. Boddy’s ghost terrorizes the guests and Mrs. Peacock kills Mr. Green with an icicle. In yet another, the AI continued a biography of Mozart by having him get married at age nine and grow up to play pianos in taverns and murder his cousins.

Here’s the Jurassic Park plot continuation (the first three paragraphs are the prompt):

Industrialist John Hammond has created a theme park of cloned dinosaurs, Jurassic Park, on Isla Nublar, a Costa Rican island. After a dinosaur handler is killed by a Velociraptor, the park’s investors, represented by lawyer Donald Gennaro, demand that experts visit the park and certify its safety. Gennaro invites mathematician and chaos theorist Ian Malcolm, while Hammond invites paleontologist Dr. Alan Grant and paleobotanist Dr. Ellie Sattler. Upon arrival, the group is shocked to see a live Brachiosaurus.

At the park’s visitor center, the group learns that the cloning was accomplished by extracting dinosaur DNA from mosquitoes preserved in amber. DNA from frogs was used to fill in gaps in the genome of the dinosaurs. To prevent breeding, all the dinosaurs were made female. Malcolm scoffs at the idea, saying that it will inevitably break down. The group witnesses the hatching of a baby Velociraptor and visits the raptor enclosure. During lunch, the group debates the ethics of cloning and the creation of the park, and Malcolm warns about the implications of genetic engineering.

The group is joined by Hammond’s grandchildren, Lex and Tim Murphy, for a tour of the park, while Hammond oversees the tour from the control room. The tour does not go as planned, with most of the dinosaurs failing to appear and the group encountering a sick Triceratops. It is cut short as a tropical storm approaches Isla Nublar. Most of the park employees leave for the mainland on a boat while the visitors return to their electric tour vehicles, except Sattler, who stays behind with the park’s veterinarian to study the Triceratops.

[AI’s continuation]

Meanwhile, Malcolm enters the control room to find that the computer system has been sabotaged. He hacks into the system and finds that Hammond is dead and his son Michael has taken over. As he attempts to escape, he runs afoul of an angry Tyrannosaurus rex who chases him through a tunnel leading out of the building. He returns to his car where he is attacked by two more T-Rexes.

Malcolm manages to drive away, but is pursued by a third T-Rex. A fourth one follows, which knocks him off the road and onto a rocky hillside. He calls out for help and is rescued by Ellie Sattler, who drives up alongside his car and opens fire on the pursuing Tyrannosaurs. She kills them both with her rifle before they can reach Malcolm again.

Back at the visitor center, Malcolm tells Ellie that she saved his life and that he owes her a debt of gratitude. She asks if he wants to come back to the island with her, but he declines. He says there are things here he needs to do first. Meanwhile, Hammond’s grandson Lex has returned from the mainland and reports what happened to his father. While he’s explaining the situation, Hammond notices through a window that one of the rival bioengineers, Ed Regis, is in his proximity.

Hammond rushes out and shoots Regis multiple times before being restrained. He declares that they are at war, before ordering the computer system to be destroyed with a flamethrower.

The film then cuts to the outside of the visitor center, where there is a scream of pain from Ian Malcolm as the building explodes.

The following morning, law enforcement arrives on the island. Gennaro briefs them on the situation. While searching for survivors, they discover that all of the dinosaurs have been killed. The footprints discovered near the visitor center belong to a henchman working for the island’s owner, Peter Ludlow. Ludlow had learned that Hammond was dead and believed that he could get away with claiming the island. He also realized that he was trapped there with several hostile dinosaurs.

The second coup happens in the hospital room where John Hammond lies dying. His granddaughter Kelly has taken over his role as head of InGen and has accelerated the timetable for building a new Jurassic Park in San Diego. However, a police sniper takes her out from a nearby building, mistaking her for Ludlow. The blame shifts to Ludlow. The actual culprit is Ed Regis, who storms the room.

After killing Ludlow with a gunshot to the face, he shouts out that he didn’t kill anyone and that Ian Malcolm was the murderer before escaping via helicopter. With InGen destroyed, Hammond dead, Ludlow dead and Regis on the run, a third shadowy group is suggested as being responsible for the attacks. The investigation suggests that they have no known motive or target.

The book ends with ‘DNA donkeys’, a note about the next entry in the series Tey cats and a reminder that there are fourteen new engravings made out to readers of this book.

The meaning of facts

Someone shared this comic on Facebook (from drawninpowerpoint):

I criticized the comic because, although it clearly portrays one character as the more ignorant, the two characters are really behaving similarly; their viewpoints are just based on articles from sources they trust. Neither is doing the work of exploring or questioning the reasons they trust those sources in the first place, or the implications of accepting what those sources say. One character just takes for granted that her source is “objective” and “scientific”, as though the other character would just accept such an analysis at face value. Not only are they not questioning the foundations of their disagreement, they seem unaware that such foundations even exist; it’s just one source of information versus another.

This points to a larger issue that these characters (and many people in the real world) seem to miss: facts are never “just facts.” News, even when it is accurate and factual, is never “neutral.” The facts, the news stories, are embedded with meaning. Editors at a news outlet (or even a prestigious scientific journal, for that matter) selected that article or that set of facts for a reason. You, as the reader, will interpret the meaning of those facts. How you interpret that meaning will be based on a lot of personal factors, but it won’t be objective. A fact, in and of itself, may be objective, but its meaning never is. The meaning of a fact must be formed through your understanding of the world, your interests and values, and even the choices you’ve made.

So arguing with someone that, “You just believe that because you watch too much Fox News!” or “You just believe that because that’s the consensus on Tumblr or Twitter!” is a useless argument, a sort of reverse appeal-to-authority fallacy. Perhaps it is true, but it takes for granted that there’s some other source more worthy of trust. That is, what news sources you trust is itself founded on something deeper, including the way you form meanings from facts.

This also goes for arguments of “this news is neutral, this is more biased, etc.” It’s all biased because it’s all filtered. And you get meaning from that filter whether you like it or not, so you might as well be conscious of it and think about it while you consume it. Why is this news outlet reporting this news story? The significance may or may not be political or controversial, but a reason exists, even if you do not have enough information to guess why.

And what about all the facts and the news you don’t know about because it never reaches you? You can’t use news that doesn’t reach you to form any meaning at all! But it still exists.

My point with all this is not to argue that news should be “more fair” or “more objective” or anything. My point is that news can never be “fair” or “objective” in the first place, so you, as a consumer of news, should be aware of the set of presuppositions with which you form meaning from the news, and you should think about what meaning the presenter of the news wants you to have, whether or not it’s controversial.

A digression, but this is the same sort of problem I’ve ranted about before with regard to our formal education system in the US, especially in the higher grades. Many parents, students, and teachers take a lot of the learning material for granted. So much of what is taught in high school and college is just useless information because the student is never going to use it. The facts lack meaning. Very few students are going to end up using chemistry and calculus, and certainly not to the extent that they need to memorize and regurgitate a bunch of facts about them this year or else. But then the student grows up and forces his child through the same wasteful system.

This whole topic is also interesting to me because it relates quite a bit to artificial intelligence. What does it mean for a set of facts to “mean” something at all? How could we program a computer to form “meaning” from a set of facts? It’s easy to understand how a human might do it, but when we try to define it formally, it’s like trying to catch a cloud. Get too close to a cloud and you lose the shape of it and you’re just lost in a fog. But I find it a fascinating question.

This also relates to how science is not nearly as “objective” as the usefulness of the scientific method may make it seem. Much of what we call “science” is in fact subjective interpretation, the forming of meaning from facts. The scientific method provides a useful way of homing in on the most practically useful sets of factual interpretations, but they remain just that: useful interpretations. Not immortal objective truths. This does not mean immortal objective truths about the material world don’t exist, only that science doesn’t tell us what they are; rather, science only provides us with a “most useful guess for a given set of purposes based on a given set of data.” (The scientific method should also not be confused with merely interpreting meaning from statistical data; collected data for which we could not control certain variables is much trickier to interpret, despite our mind’s natural inclination to do so.)

I still want AI that can help me write a novel

Just last month, OpenAI1 released a paper about their results with GPT-3, an AI language model that can be trained on mountains of text and then generate its own text from given prompts.

It looks quite impressive! Impressive too are some of GPT-3’s creative writing results from freelance writer Gwern Branwen. It still has plenty of weaknesses, in humor and logic for example, so it won’t be replacing novelists yet, but I’m particularly impressed with GPT-3’s continuation of a scene from a Harry Potter fanfiction. I wouldn’t copy and paste the results, but it looks like it would be great for generating story ideas, both at a novel’s overall plotting stage and at the actual scene-writing stage. I find the scene-writing stage to be the most tedious and mentally demanding (which is why I’ve procrastinated on it for a few years now); I would love to have a program that continually generated ideas for directions a scene could go, either by generating a possible continuation or by answering prompts such as “How might this character respond to this situation?”
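Just to make the wish concrete, here’s a minimal sketch of the kind of idea-generating tool I have in mind, using the freely available GPT-2 through the Hugging Face transformers library (since I can’t run GPT-3 myself); the scene text is a made-up placeholder:

```python
# Toy "scene idea generator": feed in the scene so far, get back several
# possible continuations to mine for ideas. Assumes the Hugging Face
# transformers library and the small public gpt2 model.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the suggestions reproducible

scene_so_far = (
    "The detective set her coffee down and stared at the letter. "
    "It was addressed in her own handwriting."
)

suggestions = generator(
    scene_so_far,
    max_length=120,          # cap the length of each continuation
    num_return_sequences=3,  # several directions the scene could go
)

for i, s in enumerate(suggestions, 1):
    print(f"--- Idea {i} ---")
    print(s["generated_text"][len(scene_so_far):].strip())
```

The output would be far rougher than GPT-3’s, of course, but the workflow is the point: the model plays firehose of suggestions, and the human stays on as the filter.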

Other possibilities with GPT-3 (or future models) are equally exciting. I’d love to see GPT-3 or something like it applied to things like:

  • Dialog for non-player characters in video games
  • Cohosting a podcast with me
  • Generating comments for this blog so it looks like I have more readers
  • Being an imaginary friend because I’m sad and lonely

One weakness of GPT-3 (and most neural-network-based AI, for that matter) is that we may not be able to see directly how it generated its answers to prompts. That is, how do we know it isn’t plagiarizing or stealing too many ideas from its training data? It may become a thorny issue for some uses.

David Cope’s older algorithmic music-generation system, for example, had similar problems. The system is, I believe, 20-something years old, but here’s a computer-generated piece in the style of Mozart:

Sounds great, but if you’re familiar with Mozart, it’s actually not that impressive; there’s just too much Mozart that’s been too directly copied; it’s just not “creative” enough. A patron of Mozart would likely have been dismayed: “This is just a rehash of this and that symphony; I want something in your style, but fresher!”

I doubt GPT-3 always copies from its training data that overtly, but the possibility could still be a problem.

The other big problem, from my perspective at least, is cost. GPT-3 requires more computing power than I can afford to pay for. OpenAI will probably target enterprise users as their first customers, not poor novelists.

There will probably be other options, though. For example, there is the recently launched InferKit, which I believe is based on GPT-2. Maybe I’ll experiment with that, as the pricing seems fair enough, but my previous creative-fiction results with GPT-2 weren’t great, especially when it would have characters from other copyrighted novels, like Gandalf, pop into scenes. I probably just have to home in on some good methods for idea-prompting.

Anyway, the future of AI continues to fascinate and excite me!

AI generated movie trailer fails to impress

IBM’s Watson supercomputer AI has created a trailer for an AI horror film! Oh my! How interesting! How ironic! How impressive! IBM is full of geniuses! Let’s watch!

Erm… ok…

Alas, I am not at all impressed with the result. This trailer tells me hardly anything about the story. I fear we’ll have to wait until AIs actually “understand” language and story (or at least analyze these elements a bit more closely) before they can create trailers that resonate with humans. Who are the characters? What’s the main conflict of the story? What’s the spiritual (inner) conflict? What’s the hook? Etc. Trailers are not just a collection of tone shifts. What stupid investors are investing in IBM based on this sort of nonsense? (And how can I get some of their money myself?)

Anyway, what we end up with is not so much a “movie trailer created by AI” as though “AI” were some generic mysterious black box. Rather, it’s a movie trailer created in some algorithmic fashion that a human (or group of humans) designed. Which, of course, is what all “AI-generated” products amount to — human-created algorithms to mimic and automate processes we may not necessarily understand.

And therein lies the true goal of “AI research”. The point is not to create a robot that can do everything a human can do but remains just as mysterious as a human brain. The point is to understand what intelligence actually is in the first place. And when we understand that, we may find we don’t need or care about sophisticated human-like robots anyway. And any sort of creepy fear that comes from wondering about the possibilities of rogue robots or the nature of digital consciousness is the result of human idiocy, spiritually and logically. Spiritually in that consciousness is not merely an emergent property of matter (we are not just meat robots). Logically in that if we could design a robot capable of “going rogue” then we can just as easily design it to not “go rogue” in the first place.

“What if the AIs kill us?!” It’s already not that hard to make a machine that can kill you; why is a robot doing it somehow more scary? I suppose because you don’t understand where the “impulse” to kill is coming from. And anyway, if we’re smart enough to create robots that can actually decide to kill in some humanly way, then we’d naturally understand where that decision comes from in the first place and would prevent it (or override the capacity to decide not to kill if we’re making an evil robot army I guess).

(Of course some AI research is perfectly happy to stay within the bounds of mimicking and automating thought processes, as these algorithms can have useful applications, such as handwriting recognition software or my own forays into algorithmic music generation, which is ultimately music theory research.)

And let us not soon forget the actual screenplay written by an artificial neural network:

And the Oscar goes to…

In Search of Strong AI

While trying to work on my novel, my mind sometimes turns to mush and I can’t think creatively, at least not in the way that novel-writing calls for. So I began a journal in which to chronicle my thoughts and explorations as I search for Strong AI. I would love to live to see Strong AI achieved; who wouldn’t?

My short-term goal, however, is to create a computer program that can teach itself to play chess (or any rule-based game) in such a way that we can observe the rules it learns. As far as I know, no one has achieved this. Chess engines focus on number-crunching algorithms, using the computer’s ability to calculate quickly to its advantage rather than trying to simulate how a human would learn. But if we can figure out how a human learns the game, I think the algorithms involved would be far more useful to advancing human knowledge than number-crunching algorithms created specifically for the game. I want an algorithm that creates algorithms.
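For contrast, here is roughly what I mean by “number-crunching”: the skeleton of a plain minimax search, written against a hypothetical Game interface. Nothing in it is a learned, human-readable rule; it’s exhaustive lookahead plus a hand-written scoring function.

```python
# Bare-bones minimax search over a hypothetical Game interface -- the kind of
# brute-force lookahead traditional chess engines are built on (real engines
# add pruning and much else, but the principle is the same).

def minimax(game, depth, maximizing):
    """Best achievable score for the maximizing side, looking `depth` plies ahead."""
    if depth == 0 or game.is_over():
        return game.evaluate()              # hand-crafted numeric scoring
    scores = (
        minimax(game.play(move), depth - 1, not maximizing)
        for move in game.legal_moves()
    )
    return max(scores) if maximizing else min(scores)

def best_move(game, depth=4):
    """Pick the move whose resulting position scores best for us."""
    return max(
        game.legal_moves(),
        key=lambda move: minimax(game.play(move), depth - 1, False),
    )
```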

Anyway, I have written up my explorations so far in my new little journal. You can download a PDF of the journal here. It’s a bit too long and clunky to post as a blog entry. I hope that as I continue to explore the subject, I will write and upload more journal entries.

Not sure anybody else out there is interested in the subject, but I’ll put it out there in case anyone is curious. Join me, and together we will rule the world.

InSearchOfStrongAI-Part01.pdf

In search of the best idea ever

Last month, or the month before (it’s all a bit of a blur), I started programming what I thought could be a general-purpose AI engine. And it works! It can find any pattern that is computational, and thus solve any computationally defined problem. But it’s unfortunately completely inefficient for most interesting tasks. If it wanted to learn to play chess, it would try to solve the entire game. While mathematically possible, that would take far too long to compute and way too much memory to be of any use, what with combinatorial explosions and all. And I don’t even know how to define a creative task, such as drawing or storytelling, in any computationally useful way. So I really didn’t achieve much.
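For a sense of how quickly that blows up, assume an average of roughly 35 legal moves per chess position (a common back-of-the-envelope figure); the number of positions to consider grows exponentially with the search depth:

```python
# Back-of-the-envelope look at the combinatorial explosion: with ~35 legal
# moves per position, the game tree at depth d holds about 35**d positions.
BRANCHING_FACTOR = 35.0

for depth in (2, 4, 6, 10, 20, 40):
    positions = BRANCHING_FACTOR ** depth
    print(f"depth {depth:>2}: ~{positions:.1e} positions")

# Depth 10 is already a few quadrillion positions; a 40-ply search (about
# 20 moves per side) is far beyond anything we could compute or store.
```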

But the seeds of obsession were planted.  How does the human mind do it?  What am I missing?  There must be an answer, because humans do it.  This is the “AGI problem” – AGI standing for “artificial general intelligence” – the elusive AI system that can do anything, not just model a solution to some specific traditionally-cognitive task (which is what most of the “AI field” focuses on).

While I knew nobody had the answer (at least none that they were revealing; otherwise we’d be living in a very different world), a trip to the bookstore seemed like a good place to start. And there I found David Deutsch’s recent book, The Beginning of Infinity: Explanations That Transform the World.


It’s a fascinating book, one of the most fascinating books I’ve ever read, really, even though it doesn’t give me any of the answers I’m looking for (Deutsch obviously makes no claim to have solved the AGI problem). At the heart of it, Deutsch argues that it is our human ability to create explanations that lets us think about all the things we do and make the sort of progress we do. Of course, we’re still left with the question: how do we create explanations? How can we program computers to do the same?

To quote Deutsch from this article, which is also fascinating:

AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.

So I’m in search of one of the best ideas ever.

Computer plays Jeopardy, but does not know how to love

According to this article on Engadget:

So, in February IBM’s Watson will be in an official Jeopardy tournament-style competition with titans of trivia Ken Jennings and Brad Rutter. That competition will be taped starting tomorrow, but hopefully we’ll get to know if a computer really can take down the greatest Jeopardy players of all time in “real time” as the show airs. It will be a historic event on par with Deep Blue vs. Garry Kasparov, and we’ll absolutely be glued to our seats. Today IBM and Jeopardy offered a quick teaser of that match, with the three contestants knocking out three categories at lightning speed. Not a single question was answered wrongly, and at the end of the match Watson, who answers questions with a cold computer voice, telegraphing his certainty with simple color changes on his “avatar,” was ahead with $4,400, Ken had $3,400, and Brad had $1,200.

This is kind of interesting because what makes a computer good at Jeopardy is the opposite of what makes a human good at Jeopardy.

A computer can easily store vast amounts of data, but cannot so easily process human language.

A human can easily understand language, but cannot so easily store vast amounts of data. After all, the entire point of Jeopardy is not understanding the question, but knowing data that most humans don’t use in everyday life.

So I think the real achievement here is in language processing: being able to output a specific answer based on nothing more than an incoming string of letters (or perhaps sound waves).

It’s easy to understand how such an achievement could be useful: imagine being able to type a question into Google and getting a direct answer (or at least a direct guess) instead of just a bunch of webpages that make you search for the answer yourself. Even though searching for the answer yourself doesn’t always take that much time, getting a direct answer would be much more convenient. Or imagine being able to speak a question into your phone or your car’s dashboard while driving, when you can’t browse the web without risking death, and having it speak back a direct answer. Imagine being able to cheat easily while you’re playing trivia games with your friends who are judging your intelligence and value as a friend based on how many useless random things you know.

While this would be nice technology to have, it still doesn’t have much power to create anything, does it? When will we have computers that can formulate their own sentences? That can write metaphors? That can write entire books? I guess we’re still too far away from that…

Anyway, if the computer wins, I say it should take over Alex Trebek’s job. I mean, what does he get paid for anyway? He just stands there and reads stuff. Computers can already do that. And besides, he still has his life insurance spokesperson job to fall back on.