Skybase
I've managed to get DeepDream working on my laptop. There's no single compiled app, but after a bit of tinkering I was able to get it running. I fed some images of mine into it, which results in pretty neat, convoluted pictures of what the AI sees versus what we're supposed to see.
[image]
Posted: July 5, 2015 9:05 pm

Skybase
[image]
Posted: July 5, 2015 9:06 pm

Skybase
[image]
Posted: July 5, 2015 9:07 pm

xirja
Hell yeah man! Haven't tried messing with that stuff yet. Here are 4 of yours that might be hot with less contrast on the input:
https://www.filterforge.com/upload/for...lorful.jpg
https://www.filterforge.com/upload/for...test03.jpg
https://www.filterforge.com/upload/for...3%20PM.JPG
https://www.filterforge.com/upload/for...exture.jpg
Edit: Indeed, it looks like low contrast and low brightness gives the machine what it needs to dream!
[images]
_____________________________________________________
http://web.archive.org/web/2021062908...rjadesign/
_____________________________________________________
Posted: July 5, 2015 10:35 pm

SpaceRay
Thanks very much for showing this. I didn't know anything about it, and it seems cool and interesting, although regrettably it appears to be only for programmers or someone who knows how to use it.
It is not a normal executable file that you can just install; it comes as source from GitHub: deepdream. As you are an expert in this, could you be so kind as to explain HOW this can be used by non-expert people? How do you install it, use it, and feed it the images? How does it work?
These Google "Deep Dream" images are weirdly mesmerising - Wired magazine
WEIRD results from Google deepdream
I wonder HOW these amazing images can be made using this new DeepDream?
Now You Can Turn Your Photos Into Computerized Nightmares With 'Deep Dream' - Deep Dream article
---------------------------------------------
Philip K. Dick wrote "Do Androids Dream of Electric Sheep?" Well, now there is an answer:
[images]
Yes, androids do dream of electric sheep, and many more in this Google search.
Posted: July 7, 2015 4:39 am

Skybase
Yeah I figured low-contrast images do better than high contrast ones. Here's a photograph of some fireworks I took. Looks like it found another universe beyond it.
For those of you having a lot of difficulty, here's an online tool that may take a long time but requires no installation of libraries of any kind: http://psychic-vr-lab.com/deepdream/ Be warned, though, that your picture becomes "public".
[image]
Posted: July 7, 2015 6:59 pm

Skybase
[image]
Posted: July 7, 2015 6:59 pm

Skybase
[image]
Posted: July 7, 2015 7:00 pm

Ghislaine
Yeah... the stuff inside the dots is very interesting. Love it, and also the animals in your image.
visit https://gisoft.ca
Posted: July 7, 2015 9:49 pm

xirja
Totally.
[image]
Not sure if this is true: https://twitter.com/M_PF/status/616922809399410688, but it looks like it is. Hopefully other images can be used to train the thing. For crying out loud!
[image]
_____________________________________________________
http://web.archive.org/web/2021062908...rjadesign/
_____________________________________________________
Posted: July 8, 2015 7:24 am

Skybase
Well, the network is trained on a large data set you can grab online. If you can get hold of the variables Google is seeing and calling on the image in the first iteration, you should be able to reverse-engineer the thing.
This is just a guess, btw.
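For reference, here is roughly how that pretrained network gets loaded in Google's IPython notebook. This is just a sketch; the file paths are placeholders for wherever you put the downloaded bvlc_googlenet files, and the mean/channel values are the ones the ImageNet-trained model expects:

```python
import numpy as np
import caffe

# Placeholder paths: point these at the deploy prototxt and the pretrained
# bvlc_googlenet weights downloaded from the Caffe model zoo.
model_def = 'models/bvlc_googlenet/deploy.prototxt'
model_weights = 'models/bvlc_googlenet/bvlc_googlenet.caffemodel'

# Same style of call as in Google's deepdream notebook.
net = caffe.Classifier(model_def, model_weights,
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet mean, BGR order
                       channel_swap=(2, 1, 0))                  # RGB -> BGR

# The "variables" Google is using are really these learned weights;
# you can inspect them layer by layer.
for layer_name, params in net.params.items():
    print(layer_name, params[0].data.shape)
```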
Posted: July 8, 2015 8:10 am

SpaceRay
I have seen these two links:
How to install DeepDream with and without programming experience
How do I make my own deepdream images?
Google DeepDream on YouTube, with many videos available with examples and also on how to use it: Youtube Google DeepDream search
Good advice and a good tip to know. Awesome and amazing artworks you have done; you clearly know how to use it and understand how it works to be able to make such beautiful examples, which are much better than other, simpler deepdream images. I admire that you are really an expert in graphic design who can learn new tools and new software easily and quickly, and you have creative ideas, as shown here. I have not yet seen whether I can really use it myself without programming knowledge; maybe I will have to wait until someone makes a GUI or visual version with an easy, standard installation. I think that if this gets more popular there will be a version with an interface for non-programmers.
Is this the same thing as the real one, with the same features, or is it a cut-down version? Also, it is not good that the image is made public.
Posted: July 8, 2015 12:32 pm

xirja
Keyword: The Thing
[images]
Now all we need are king crab upside-down man heads.
_____________________________________________________
http://web.archive.org/web/2021062908...rjadesign/
_____________________________________________________
Posted: July 8, 2015 4:40 pm

Skybase
Spaceray, the feature set doesn't change. What's different is that you can't change the variables around, which doesn't necessarily make better or worse images. So basically, it's the same thing.
Probably happening. It's just that Google released the code as-is, with various dependencies that require relatively specific installation methods. But that's just "right now"; the code is open source, so keep your eyes out for it!!
Posted: July 9, 2015 12:29 am

SpaceRay
Explaining, in some possible way, how DeepDream works and what it does:
Artificial Neural Networks Can Day Dream - Here's What They See
and the extended explanation, in some way, is found here: Inceptionism: Going Deeper into Neural Networks
Dockerized deepdream: Generate ConvNet Art in the Cloud - Brain-dead simple instructions for programmers
I have just found what seems to be a new alternative deepdream release that appears to be simpler, although still only for programmers who understand it.
Read the whole, complete text at the link. It may be brain-dead simple for programmers, because I do not understand it and do not know what is being talked about. Or am I brain dead?
Also, it seems that it is all command-line based, and I personally do not like any software based on the command line without any graphical interface. Sorry, I am a visual person and do not feel right with text-only software that is only for programmers and coders.
Really cool fractal-style DeepDream video:
Great and awesome fractal-style compositions in the video, really cool.
Thanks. Maybe the features don't change, but if you can't change the variables, I suppose that may make a big difference, or not? It is like, in comparison, having Filter Forge filters that render one image with one preset where you can't change any of the values. As said, I think I will wait for an easier way made for non-programmers.
TEST with the online tool: it shows "computer is now dreaming" and some sheep jumping.
It may take maybe a WEEK to complete? Maybe this is because there is a huge queue of people uploading images and it takes time to process them all.
Posted: July 9, 2015 3:38 am

Skybase
I suspect the server is just flooded. The process is very CPU-intensive. It's basically iterative, in that it keeps running the image through the network as it renders. The larger the image, the longer it takes. Hence it's kinda unrealistic (right now) to do poster art with this. It's just kinda a nerd thing. Yeah, just wait for a GUI version. My bet is it'll take quite a while before that happens, but people are clearly working on it.
I guess it kinda depends on what you input. Basically you can call parts of the code to do even crazier things, and you do have to write your own little functions if you want a bit more. But in general, changing a variable a tad doesn't visually affect much; it affects the process more. Overall, it's a bit much for non-programmers right now, but the general method via Docker is relatively easy if you're computer savvy and have experience working with that type of thing. I'm no programmer, but I was able to pull this off, so I personally think it's not that bad.
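As a rough illustration of the kind of "little function" I mean, here's a sketch. It assumes the `net` object and the `deepdream()` helper from Google's notebook are already defined in your session; the file names and layer choices are just examples to play with:

```python
import numpy as np
import PIL.Image

def dream_file(path, layer='inception_4c/output', octaves=4, iterations=10):
    """Load an image, run the notebook's deepdream() on it at a chosen layer,
    and return a uint8 array ready for saving. Assumes `net` and `deepdream()`
    from the Google notebook are already in scope."""
    img = np.float32(PIL.Image.open(path))
    out = deepdream(net, img, iter_n=iterations, octave_n=octaves, end=layer)
    return np.uint8(np.clip(out, 0, 255))

# Lower layers tend to emphasize textures and edges; higher layers bring out
# the eyes and animals everyone keeps seeing.
result = dream_file('fireworks.jpg', layer='inception_3b/output')
PIL.Image.fromarray(result).save('fireworks_dream.jpg')
```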
Posted: July 9, 2015 9:33 am

Rachel Duim
[image]
Posted: July 9, 2015 12:12 pm

SpaceRay
Interesting to know. So if the time is related to the size, it is like what happens in Filter Forge: this is like some of the slow filters in Filter Forge that are unrealistic to use at higher resolutions, unless you want to wait many hours for the render.
I have tested this website, which offers a very simple, automatic upload, but it seems that it is also flooded, OR they want you to pay $1.99 for each image to have it made faster. When I tested it just now, there were 2,561 images ahead of mine in the free queue.
Posted: July 10, 2015 6:30 pm

Skybase
Maan.... lol finally it has come down to this.
Not that it's wrong, I just hope the $1.99 goes to the right causes. Well, this bothers me, so send me one image you wanna see Google Deep Dream run on and I can process it. It's gotta be relatively small, so don't expect print quality out of this.
[UPDATE] After a bit of tinkering I was able to produce larger images, although it takes significantly longer to work itself out. You can try sending me reasonably large images, but not too large. I doubt it'd work at ridiculously huge resolutions. It's pretty RAM-heavy as well.
Posted: July 10, 2015 9:03 pm

SpaceRay
I wonder why some of the examples are really just the same image with a weird psychedelic overlay, while others are VERY different compositions from the original, with added figures and additional images. Maybe it is because this seems to use some kind of fractal process and you have to configure it in some way to make cool artworks.
Yes, well, I think it is wrong to charge for each image, and I would not pay it, but it seems it will go toward paying for the server costs and maintenance of the system.
Thanks for the offer, but it does not matter; I am not in any way desperate to use this and do not want to use it now, and I do not want to bother you, as you surely have better things to do.
When you say "larger" images, what resolution or image size are you referring to? I think that if this could work with 4000 x 4000 it would be enough. And how long (minutes or hours) does it take to make one?
Posted: July 12, 2015 1:37 pm

SpaceRay
DeepDream has reached Pinterest and Flickr (and many more places)
Pinterest: googles deep dreams algorithm and inceptionism
One of the many Flickr album pages: DeepDream images collection Flickr by Kyle McDonald
It also seems that there is another website that processes DeepDream images: https://dreamdeeply.com/
Posted: July 19, 2015 3:04 am

Skybase
hehe, I think we're kinda overkilling it to the point where it's getting a bit boring.
Posted: July 19, 2015 4:07 am

Rachel Duim
I took the liberty of enlarging the following image so that I could look at it. It was flat as a pancake brightness- and color-wise, so I punched it up a bit with Vibrance. Poor Kiko, the scientific experiments, oh the agony!
[images]
Math meets art meets psychedelia.
Posted: July 22, 2015 11:12 pm

SpaceRay
OH! Rick Duim, this is the evolution of future cats that may have multiple eyes, and I wonder if they would move independently of the main 2 eyes.
[image]
Posted: July 31, 2015 12:02 pm

SpaceRay
It has already been a month since this appeared. Has there been any news about a new tool or a new GUI, or is it still the same?
I mean that maybe someone has made something new that can be used in an easier way, and it may have appeared in some news.
Posted: September 4, 2015 4:10 am

Skybase
RealMac Software has created DeepDreamer, which gives you the Deep Dream stuff with options:
http://realmacsoftware.com/deepdreamer/
We are also starting to see Deep Style, which lets the network learn artistic styles ... so it can reapply them to other images, like image filters. This is actually very, very cool, so check it out. You will love it.
http://www.qarl.com/qLab/?p=106
https://imgur.com/a/ujf0c
You can grab the source here: https://github.com/jcjohnson/neural-style
Alternatively: https://github.com/kaishengtai/neuralart
These are processor-heavy, so I would say it'll take a while before this reaches the common market.
Posted: September 4, 2015 6:13 am

Rachel Duim
Here's a snapshot of DeepDreamer. I didn't know what I was doing; this is what came out. First, it looks an awful lot like reaction-diffusion. Second, it's not free, as you can see from the partially crippled screen (they want US$14.99). Pretty cheeky for a beta product; we used to call this crippleware.
[image]
Math meets art meets psychedelia.
Posted: September 4, 2015 5:17 pm

Rachel Duim
[image]
Posted: September 4, 2015 6:55 pm

Skybase
It honestly looks fine; it works perfectly for me. The first snapshot came out that way because of your settings: layer 3a 1x1, your iteration count, etc. DeepDream by default has various alternative settings, accessible via the layer you target, which you can load up to produce pretty intense images. Keep in mind that Deep Dream does seem to favor images with low contrast for more interesting results.
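If you're on the notebook version instead of DeepDreamer, you can list the layer names you're allowed to target (that "3A 1x1" corresponds to a blob named something like 'inception_3a/1x1' in GoogLeNet). A quick sketch, assuming `net` is the Caffe model loaded as earlier in the thread:

```python
# Print every blob (layer output) available in the loaded network, e.g.
# 'inception_3a/1x1', 'inception_4c/output', and so on. Any of these names
# can be passed as the end= argument when generating a dream.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)
```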
I think, under RealMac's philosophy for dev costs, charging $14.99 is technically fair, judging from the actual stability and speed of the product. However, it's a curiosity. I can run Deep Dream via Docker in a couple of steps, and it only takes a couple of copy-pastas of code before I get other results. It's a bit slower, but I didn't pay for my results. If anything, I'd rather drop money on Deep Style, which recently hit waves of interested people. That looks more entertaining to me than mashing your pictures into psychedelic bonkers of dogs and weird caravans. Ultimately I feel DeepDream is supposed to be an exploration into deep-learning algorithms, and the image-making function we see today is really a byproduct. It's open source because it opens people up to ideas about a future where machine learning can possibly make lives easier and more interesting.
Posted: September 4, 2015 9:11 pm

Rachel Duim
I agree with the open-source philosophy of it, but by its very nature no image created by it is truly "yours". So for the public philosophy, I'm all for it. The average person (whatever that is) needs to see that technology is just another tool, another brush, another shovel. It can do good things.
As you mention, it seems that "eyeballs" and dogs and other objects show up too often; the technology appears to be somewhat limited at this point. It might be how the data is sliced and the limitations of the pattern recognition that is going on. It is amazing that it works at all! Given that it is open source, I think I'll wait to see if someone comes up with freeware for the Mac that doesn't require a specific GPU. For now, open source is not a solution for most Mac users without the required hardware. I agree that $14.99 is fair, but they could have done a better job (say, watermarking) instead of disabling a third of the screen. And it does say Public Beta all over it; I would think a short trial with export disabled would make more sense. But that's my 2 cents, 3 with inflation. I will look at Deep Style next.
Math meets art meets psychedelia.
Posted: September 4, 2015 10:40 pm

Skybase
Well, what you really do is change the Caffe model to something else; for example, there are a couple freely available online that recognize places or Flickr photographs. Again, the intention of those models is to be able to recognize what humans see daily, as well as places. You can swap out the default model for another relatively easily if you're working off the version that I'm using, but the RealMac version probably doesn't let you do that due to copyright restrictions on some of the models (i.e. the Flickr-trained model). Also, the default model probably has an absurd number of dogs and other animals in its training set, so that's really a bias you're seeing. Models are basically "trained" to recognize forms, so you can make one recognize anything. For example, you can use it for handwriting, daily objects, or places, and you can of course get more specific with these. As an example, there's already a green-screen cutout method that uses deep learning specifically to produce accurate green-screen chroma keys.
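To make the model swap concrete, here's a sketch. The file names are placeholders for whichever alternative model you've downloaded (a places-trained GoogLeNet, for instance), and the mean/channel settings may need adjusting to match how that particular model was trained:

```python
import numpy as np
import caffe

# Placeholder paths: swap in the deploy.prototxt and .caffemodel of whatever
# alternative network you grabbed (places, Flickr-style photos, etc.).
alt_prototxt = 'models/alt_model/deploy.prototxt'
alt_weights = 'models/alt_model/alt_model.caffemodel'

net = caffe.Classifier(alt_prototxt, alt_weights,
                       mean=np.float32([104.0, 116.0, 122.0]),  # check the model's own mean
                       channel_swap=(2, 1, 0))

# The dreaming code itself stays the same; the "bias" in what the network
# hallucinates now comes from this model's training set instead of ImageNet's
# dogs and animals.
```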
You also shouldn't need a GPU to run the images through. A GPU is clearly faster, but the process is fast enough without one.
Posted: September 5, 2015 4:22 am

Rachel Duim
Here is neural style grown at home! I managed to get it to work after installing (and sometimes reinstalling) so many packages I lost count. I got the source from
Neural-Style and then went on a quest installing one dependency after another. This one runs through torch7 and Lua on Mac OS X 10.10.4 (LuaJIT, actually!). I took two images around 3000 pixels wide, one for the style:
[image]
... and one for the content:
[image]
I did a small image so I did not have to wait (512 pixels wide, took 15 minutes for 200 iterations) and punched it up with Levels in Photoshop. Here it is:
[image]
Math meets art meets psychedelia.
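For anyone wanting to reproduce a run like the one above, here's a small Python wrapper around the neural_style.lua command. The flag names are as I recall them from the jcjohnson README (double-check it, since the project is still changing), and the file names are placeholders:

```python
import subprocess

# Placeholder file names; point these at your own style and content images.
cmd = [
    'th', 'neural_style.lua',       # the Torch/Lua script from jcjohnson/neural-style
    '-style_image', 'style.jpg',
    '-content_image', 'content.jpg',
    '-image_size', '512',           # output width; 512 keeps CPU runtime tolerable
    '-num_iterations', '200',
    '-gpu', '-1',                   # -1 should select CPU mode
    '-output_image', 'out.png',
]
# Run from inside the neural-style checkout so the script and models are found.
subprocess.run(cmd, check=True)
```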
Posted: September 5, 2015 7:02 pm

Skybase
Very nice!
[image]
Here's another interesting one; this one includes animation features:
https://github.com/mbartoli/neural-animation
Posted: September 6, 2015 2:33 am

Rachel Duim
Rough installation guide... well, I'll tell you off the top of my head what you will need for Mac OS X 10.10:
Xcode 6 or higher (if you have to do this, it's over 2 GB, go to lunch)
Python and ipython (I installed Anaconda to get these)
Lua (with LuaJIT)
torch7
loadcaffe (its installation instructions are wrong: sudo apt-get install libprotobuf-dev protobuf-compiler does not work. Get Homebrew (brew.sh), then run brew install protobuf)
There are smaller steps for libraries etc.; I would have written this down if I had known it was going to be 10 steps or more. Let me know if you run into an issue somewhere and I will try to recall how I did it.
WARNING: This is alpha software: limited error checking, crashes easily, and runs quite slowly if the output is over 1000 pixels wide. It is both a CPU hog and uses up main memory quite easily (on a 16 GB system), and it starts swapping memory in and out, slowing considerably when that happens. It is all command line, no GUI.
Here is the next attempt using the above images, now 1024x768, took 6 hours:
[image]
Math meets art meets psychedelia.
Posted: September 6, 2015 12:51 pm

SpaceRay
Thanks very much, Skybase, for the links you have posted. Good to know that there is an alternative, even if it is not free and costs $15; having a GUI that works right may make it worth buying, and it is not a fortune to spend.
Thanks also very much to Rick Duim for the examples and comments.
Posted: September 6, 2015 7:05 pm

Skybase
Thanks Rick for taking the time on that!
So I had some of those already installed, fortunately, but it sounds like a bad idea right now, especially when I'm doing pretty important work on this laptop. heh. Oh well.... I played around with deepdream a lot after installing all those dependencies. Somebody later came up with a Docker install version which made it exponentially easier to install the whole thing, so hopefully that sort of thing happens here soon enough.
The real reason I was trying to install Deep Style was to see if it's capable of replicating my artistic style. For example, the images below:
http://skybase.deviantart.com/art/The...-536485144
http://skybase.deviantart.com/art/A-View-365852847
http://skybase.deviantart.com/art/Sno...-365853586
Like those sloppy-square designy things I sometimes do, which make stuff really abstract. But I felt there's potential here to make some kind of art piece that auto-generates a picture in my style (theoretically), which lends itself to somebody technically living forever as long as the machine isn't destroyed. It's just a concept-art thingy I had in mind.
Posted: September 7, 2015 4:22 am

Rachel Duim
Neural-Style works pretty well for 512-pixel-wide output, but it is quite slow for anything larger than that (as mentioned above). I'm doing a 1280-pixel-wide output image for 300 iterations now. It's up to 250; I'll post the results. Estimated time to complete: 22 hours.
This program needs a UNIX box with terabytes of memory for larger image output. It's amazing the Mac can handle the swapping of memory in and out. The algorithm is quite impressive given the research nature of it (MIT license). JC Johnson is actively updating the project's Lua files, so it's a work in progress.
Math meets art meets psychedelia.
Posted: September 7, 2015 3:58 pm

Rachel Duim
[image]
Posted: September 7, 2015 5:49 pm

SpaceRay
It seems that this is growing and getting more popular.
Posted: September 16, 2015 10:45 am

Skybase
I thought I'd sit down and figured I might as well install neural-style. So after dealing with broken files everywhere, thanks to myself not taking care of anything.... I got it to run! Oh boy, super excited, I decided to throw in some sample images.... oh boy, this is going to take a while.
But amazing indeed. My MacBook Pro, despite being what it is, surprisingly performs regardless.
[image]
Posted: January 10, 2016 8:40 am

Skybase
Ok, so it works. Not the best quality, but it works. It's a mixture of my cat and a digital painting I did a couple of years ago.
Although at this rate it's clearly faster to run this on a GPU. I think it's time for Amazon EC2. I still have a lot to read up on, but as far as my understanding goes, the benefits are clearly there. The thing is, I've been following a community of deep-dream enthusiasts and they've already set up an AMI to use. Otherwise I'm considering tinkering with Amazon EC2 to make my own mini render hub. A lot of my work is starting to get processor-heavy, and it's just a pain in the ass waiting for the darn computer to compute a couple of particle simulations.
[image]
Posted: January 10, 2016 11:17 am |