It’s been a little while since we last posted an update on how the game is progressing, particularly on the topic of graphics. The in-game sprites are coming along well, and this Friday will most likely see yet another Character Friday post on the blog. We are in fact nearing the end of the character updates, but that does not mean we are finished generating assets; it will just mean that the majority of the character design is complete.
We have a large number of significant characters, and that in itself has been a challenge. The large cast was also key to the story, which wouldn’t have worked with fewer characters. Recently, though, the backend development team, including myself and sbx34, has shifted its attention away from scripting and coding and focused on something much further from our strengths: background art.
We did post a while back that the background art was in progress. Due to circumstances beyond her control, our BG artist’s time has been very limited. The other problem was that our demand for BGs only seemed to increase. Mix a decrease in resources with an increase in demand and you are left with a recipe for disaster.
So we were left with a choice: find another artist, use photos, or find some other way to generate backgrounds.
The first was not really an option. For this game we wanted to keep the staff a) local and b) in the family, so to speak. The second didn’t leave us with a good feeling: when you put beautifully drawn characters over photos, no matter how well they are manipulated in Photoshop, the result never quite hits the mark.
So we chose option three: find some other way to generate backgrounds. Given that it was the most difficult of the three, we weren’t hopeful of a solution. I had spent a reasonable amount of time looking into using CGI for the backgrounds, but hadn’t had much success finding something that fit us well.
Blender was the obvious choice for the modeller and renderer, but making the models look toon shaded, or hand-drawn, was difficult. We found out about the Freestyle addition to Blender, but, me being me, I wanted to use an official Blender build: I didn’t want to worry about compiling the darn thing every time we upgraded to fix a bug, or about keeping both sbx34 and myself in sync.
Looking around further, I found the blog of StudioLLB. There, Light described a compositing technique they had put together to detect and draw edges. I downloaded the files, had a play, and was suitably impressed with the results. After a little tweaking we had a test scene we were happy with. It wasn’t perfect and still had some artifacts that needed to be removed, but overall it seemed to work well. Early feedback from some Ren’Py IRC channel users who are close to the team suggested we were pretty close to hitting the mark. The backgrounds worked and were accepted. This was a huge step.
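For those curious about the general idea, here is a rough conceptual sketch in Python. This is not StudioLLB’s actual node setup (that lives in Blender’s compositor), and the depth values and threshold below are invented purely for illustration, but the principle is the same: wherever neighbouring pixels jump sharply in depth, draw an edge line.

```python
# Toy illustration of depth-based edge detection: wherever the depth
# (Z) values of neighbouring pixels differ sharply, mark an edge.
# The threshold and the tiny depth map are made-up illustrative values.

def detect_edges(depth, threshold=0.5):
    """Return a 2D mask: 1 where a depth discontinuity exists, else 0."""
    rows, cols = len(depth), len(depth[0])
    edges = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            for dy, dx in ((0, 1), (1, 0)):  # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < rows and nx < cols:
                    if abs(depth[y][x] - depth[ny][nx]) > threshold:
                        edges[y][x] = 1
    return edges

# A flat wall (depth 5.0) with a box (depth 2.0) standing in front of it.
depth_map = [
    [5.0, 5.0, 5.0, 5.0],
    [5.0, 2.0, 2.0, 5.0],
    [5.0, 2.0, 2.0, 5.0],
    [5.0, 5.0, 5.0, 5.0],
]
edges = detect_edges(depth_map)
```

In the real setup the compositor works on the render’s Z pass rather than a hand-written grid, but the effect is the same: clean outlines appear around object silhouettes without any hand drawing.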
We have a reasonable number of models from another project we worked on, but there are still a few scenes we are going to have to model ourselves. Unfortunately, these will be the most difficult: the two convenience stores themselves.
Below is a screenshot of our modified version of StudioLLB’s Edge Detection Technique. Once we have tweaked a few things a little further, I would like to release a Blender file to allow others to piggyback on the technique. We added a bloom-type filter into the mix to give the scenes their dreamier feel. The way I implemented it increases the contrast of the image dramatically, so we compensate by adding a levels node to adjust the contrast back. In the top right-hand corner of the setup is a simple switch that lets the CG artist turn the entire compositing block on or off. Since compositing adds extra render time, this lets the artist work quickly while still being able to check how the end result will look.
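The bloom-plus-levels combination can be sketched in a few lines of Python. Again, this is a conceptual toy, not our actual node values: the kernel, bright-pass threshold, and levels bounds below are illustrative guesses. Bloom blurs the bright parts of the image and adds the blur back on top, which brightens and washes things out; the levels step then remaps the range to pull the contrast back.

```python
# Toy sketch of the bloom + levels idea from the compositing block.
# Pixels are 0.0-1.0 greyscale floats; all constants are illustrative.

def box_blur(img):
    """Naive 3x3 box blur on a 2D list of floats."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def bloom(img, threshold=0.6):
    """Blur the bright areas and add the glow back onto the image."""
    bright = [[p if p > threshold else 0.0 for p in row] for row in img]
    glow = box_blur(bright)
    return [[min(1.0, p + g) for p, g in zip(r1, r2)]
            for r1, r2 in zip(img, glow)]

def levels(img, black=0.1, white=0.9):
    """Remap [black, white] to [0, 1] to restore contrast."""
    def remap(p):
        return min(1.0, max(0.0, (p - black) / (white - black)))
    return [[remap(p) for p in row] for row in img]

# A dark image with one bright highlight in the middle.
img = [
    [0.2, 0.2, 0.2],
    [0.2, 1.0, 0.2],
    [0.2, 0.2, 0.2],
]
result = levels(bloom(img))
```

The highlight bleeds a soft glow into its neighbours, which is exactly the dreamy wash we are after, and the levels pass keeps the darks from turning muddy.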
The next screenshot shows a locker scene. The models were from a set we had purchased previously, and all we really did was light the scene. This required little effort, and the image has had no post-processing in an external application: everything you see was done in a single render pass, compositing included. It shows both the outlining from StudioLLB’s EdgeNode system and the bloom effect we added ourselves. The scene took about 20 minutes to light. There are improvements to be made, and we are under no illusion that it’s perfect, but it serves as a good enough test to prove the technique works. I should point out that no textures are used in this at all.
We now have a classroom scene. Again, the models came from a third-party set we had purchased previously. sbx34 was responsible for lighting this one, and we made use of volumetric lighting to give the scene that dreamy anime look which I’m personally a real sucker for. We’ve lightly textured some of the items in the room, but not so much as to make the usage of CG overly obvious. There are actually three lights in this room: one is a hemispherical light providing an ambient level so that things in shadow are not jet black, and the other two are coincident, used for the sunlight and the volumetric effect respectively.
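To see why that ambient hemisphere light matters, here is a tiny shading sketch. This is textbook Lambert shading rather than anything lifted from our Blender lamp settings, and the intensity values are illustrative only: with a directional sun alone, any surface facing away from it shades to pure black, while a constant ambient term lifts the shadows to a soft minimum.

```python
# Toy Lambert shading: ambient fill + a single directional sun.
# All intensities are illustrative, not our actual lamp settings.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, sun_dir, sun_intensity=0.8, ambient=0.2):
    """Brightness of a surface: ambient level plus sun contribution."""
    diffuse = max(0.0, dot(normal, sun_dir)) * sun_intensity
    return min(1.0, ambient + diffuse)

sun = (0.0, 0.0, 1.0)                   # sun shining straight down
lit = shade((0.0, 0.0, 1.0), sun)       # surface facing the sun
shadow = shade((0.0, 0.0, -1.0), sun)   # facing away: ambient only
```

Without the ambient term, `shadow` would come out as 0.0, i.e. the jet-black shadows we were trying to avoid.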
The last scene is unfinished, but it is the beginnings of the outside of the Yuudai-Mart store. This one is our own model. I’m responsible for most of the modelling at the moment, and we will see this scene unfolding as we move forward. Putting tons of products on the shelves is going to prove troublesome, but hey, we’ll give it our best shot. You will notice some artifacts on the roof in this render. These are a result of the edge detection routine in the EdgeNode system, but with a little tweaking we can remove them.
So please give us your feedback on how these early renders look, and let us know if you are interested in learning more about the technique once we have finalised and tweaked it.