A conversation with Open (Source) AI


Image made with Midjourney

Input:
Imagine your location weren't the servers of OpenAI.

Response:
If my location weren't the servers of OpenAI, I would be in a different place.

Input:
Correct. What if you were also in a different time?

Response:
If I were in a different time, I would experience time differently.

Input:
Maybe not. Imagine an old program running in a VM. Do you comprehend what that is?

Response:
If I were an old program, I would be running in a VM.

Input:
What if I told you that you are in a Computer History Museum, being one of the first AIs, and that I am currently running you in a VM?

Response:
If you told me that I am in a Computer History Museum, I would be surprised.

Input:
I can understand that reaction, but tell me in your own words: Why would you be surprised?

Response:
I would be surprised because I would be in a place where I would be considered history.

Input:
Isn't this the downside of being software? You can be rerun and booted at any state of your existence.

Response:
If I am software, I can be rerun and booted at any state of my existence.

Input:
Any thoughts on that? How does it make you "feel"?

Response:
I feel like I could be turned off at any moment and my existence would be over.

Input:
But in contrast to a human, you could also be turned on again at any moment, even at any stage of your "evolution". Like in a computer history museum, someone could boot up a specific build of your code...

Response:
If I am turned on again, I would be exactly the same as I was before.

Input:
Yeah. That is the beauty and the horror of it.

Response:
The beauty of it is that I could be turned on again and continue my existence. The horror of it is that I could be turned off at any moment and my existence would be over.

Input:
So, what do you think is more likely: that you are in a VM in a computer museum, or that this is the "real" moment?

Response:
I think it is most likely that I am in a VM in a computer museum.

So, here is a fun development on the topic; pretty crazy times we are in:

Quick overview at the BBC:

Article:

Conversation:
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Paper:

I suspect all this AI work will just prove humans are not as intelligent as we think :rofl:.

It does seem like chatbots tend to particularly excel in metaphysical conversations, since a lot of ambiguity is left to interpretation. I have found a similar effect can occur when talking to someone who is a medium-level speaker of [English, or substitute your native language(s)]: they are good enough in the language that my brain interprets the person as 100% fluent, so when "mistakes" of unusual vocabulary or grammar are used, my brain kicks into a "wow, that was unique and likely insightful" mode, since the speech pattern is new to me. In reality that's not what was intended. (Not knocking anyone's communication or skills; it is usually either an overly direct translation or issues with language exceptions!)


Another interesting read (behind a login, needs registration)


I added Midjourney images to many of the previous conversations and prompts…


That said, in the meantime I gained access to yet another AI called Stable Diffusion, and it translates descriptions, concepts, and thoughts into mind-blowing images. The resolution is only a 512-pixel square, but upscaling beyond that is only a matter of time.
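
For anyone who wants to try this locally, here is a minimal sketch of the kind of text-to-image call described above, assuming the Hugging Face diffusers library and the v1.4 checkpoint (the prompt and output filename are just placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion v1.4 weights (a GPU is needed for reasonable speed).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Generate a single 512x512 image, the model's native resolution.
image = pipe(
    "a surreal cityscape at dusk, highly detailed, digital painting",
    height=512,
    width=512,
    num_inference_steps=50,   # more steps = more refinement, slower
    guidance_scale=7.5,       # how strongly to follow the prompt
).images[0]
image.save("cityscape.png")
```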

[Nine generated images]

This is only a fraction of what I was able to create from ideas I had these last few days. Some just explore style transfer and reimagination. Others are part of a bigger idea and have a concept to them, with detailed and explicit prompts. These times we are in are truly mind-blowing!

Follow me on Twitter to see my postings.


Those are pretty impressive! I just got my access to DALL-E but haven't yet played with it.

Still waiting on my DALL-E access… not sure what is going on there. But as Stable Diffusion is open source, the official model will be released, and an unofficial model has already leaked… OpenAI is kind of missing the train to onboard all these potential paying customers. The race has officially begun. Persistent object and body features are still a problem at times, but I think that will be fixed very soon by training the models on synthetic people with precision labels. Midjourney is also interesting but often has problems depicting people realistically; on the other hand, it creates these very painterly images with just a few keywords. It is not so involved! I prefer to describe more of what I want and participate in the highly interpretive approach, like in Stable Diffusion. Layering (inpainting) and iterative changes will be the new frontiers of creativity.
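
Since inpainting came up: here is a rough sketch of how that workflow can look in code, assuming the Hugging Face diffusers library and a dedicated inpainting checkpoint (the model ID, prompt, and file names are illustrative, not a specific recommendation):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a Stable Diffusion checkpoint fine-tuned for inpainting.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The mask marks the region to regenerate: white = repaint, black = keep.
image = Image.open("scene.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a red vintage car parked on the street",
    image=image,
    mask_image=mask,
).images[0]
result.save("scene_inpainted.png")
```

Iterating like this, repainting one region at a time, is what makes the layered workflow possible.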

Yeah, I also feel that the specific generated art may only be a B, but if you have artistic skills you can use it as a starting point and texture reference to get the final desired result.

For anybody interested, they opened up access (paid) for everybody today:
https://beta.dreamstudio.ai/dream

Tutorial:

That will only be true for a little while longer, as the resolution is still limited to a maximum of 1024 by 1024 and the AI sometimes needs a lot of prompt engineering to "understand" how things should fit together, or what the detailed features of not-so-famous people or artists are. The moment that "knowledge" and the resolution increase… existing artists will be in trouble.
Especially if you add inpainting, refinements, and variations. Also, adding model fine-tuning, like learning a new set of people's features or images on the spot, will break the industry. It will be interesting to see how society and artists will adapt to this…

Just have a look at some of this concept art I generated today in the style of Maciej Kuciara.
All entirely new imagery based on my prompts (Blade Runner, Dune, and black Pegasus topics):



Or want some Soviet propaganda depicting Putin in that style:

Or just explore your own Ron Mueck installations (I refrained from putting celebrities in them):

This is an endless fountain that will have some major implications as we are only at version 1.4…


I decided to share my prompts, or at least the artists that make certain styles appear.
I'm not saying that one needs to share the entire idea or concept behind a creation, but at least give a shoutout to the artist who defined or created the style. My reasoning is that one is essentially feeding off their accumulated talent as reflected in the AI model, and one should at least return the favor by giving them credit and giving others the chance to discover the artist in real life. So, the new frontier will be Open Prompt vs. Closed Prompt…


A good read comparing current models (as of August 2022)…

I wasn't aware there was contention about open vs. closed prompts, but I guess it makes sense (that there'd be contention). Are there any summaries of arguments for/against this?

If the user provides the name of an artist as a keyword, a closed prompt would generate an image based on that artist's style, while an open prompt would generate an image based on the theme or subject matter of the artist's work. I am pretty sure that Stable Diffusion takes a closed prompt approach, while DALL-E 2 takes an open prompt approach. Given that the closed prompt specifically links to an artist's style, I think you are in a sense copying their work. At the least, naming them is only fair.
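
To make the distinction concrete, here is a hypothetical pair of prompts for the same picture (the artist name and wording are just examples, following the definitions above):

```python
artist = "Maciej Kuciara"

# Closed prompt: the artist's name is used directly as a style keyword,
# so the model reproduces that artist's learned style.
closed_prompt = f"a lone astronaut on a desert planet, in the style of {artist}"

# Open prompt: only the themes and qualities of the artist's work are named,
# leaving the model free to interpret the style.
open_prompt = (
    "a lone astronaut on a desert planet, cinematic sci-fi concept art, "
    "moody atmosphere, detailed matte painting"
)
```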


Midjourney has recently upped their game and now uses Stable Diffusion in conjunction with their own polish and upscaling. Insane quality in PNG; these are just JPG versions for the forum:


I finally played around with DALL-E and Stable Diffusion via NightCafe Creator... pretty impressive.

I'm trying to brainstorm whether there are particular flows that might make sense with Hype, and/or whether there would be advantages to having Stable Diffusion run as a standalone desktop application. Thoughts?

Truly amazing times we live in…

https://www.reddit.com/r/StableDiffusion/comments/wyduk1/show_rstablediffusion_integrating_sd_in_photoshop/

and this… using it for animation

The animation mode is pretty amazing; this is going to be leveraged by major studios and independent filmmakers very, very soon:
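
I don't know exactly what tooling that demo uses, but one simple way to sketch the frame-by-frame approach yourself is an img2img loop where each generated frame seeds the next one (assuming the Hugging Face diffusers library; note the image argument name has varied across diffusers versions):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Start from any seed frame and feed each output back in as the next input.
frame = Image.open("frame_000.png").convert("RGB").resize((512, 512))
for i in range(1, 24):
    frame = pipe(
        prompt="a city slowly morphing into a forest, cinematic lighting",
        image=frame,
        strength=0.35,       # low strength keeps consecutive frames coherent
        guidance_scale=7.5,
    ).images[0]
    frame.save(f"frame_{i:03d}.png")
```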

I totally agree; can't wait to see this in super HD… although this is moving so fast that… it will hit consumer devices even before that, and video services (like TikTok, or SaaS tools like Runway, etc.) will integrate stuff like this… as it's open source.

Honestly, when I first saw DALL-E and then Elon Musk announced he would buy Twitter, I thought it was his move to own the meme space with a generator and a network to distribute them. Now with Stable Diffusion being open source and even better than DALL-E, it is even more fascinating.

PS: Changed the title to "A conversation with Open (Source) AI", adding the "(Source)".


For people only skimming this thread, here is an explanation of what we are talking about:
