January Musings

Eldiren here. Happy belated New Year, everyone. With the new year I've gotten a refreshing start on the Lumiere Rig Tools plugin after a bit of a hiatus and some bugs with the rig itself. The plugin is now complete and is mainly waiting on me figuring out shop quirks with the website here.

As always, it's an exciting time to be in 3D. My previous musings mentioned Isotropix Clarisse, and the team over there seems to be doing great things. That said, there's an even more exciting development for Cinema 4D users in the form of C4DtoA, Solid Angle's new plugin that lets you use Arnold natively in Cinema 4D. Looking at some of the examples posted this month over at CGSociety, it's shaping up to be awesome. My recent tutorials have focused on Alembic workflows, and the interesting thing about that, combined with the new Arnold plugin, is that it removes the need for me to consider external packages like Clarisse and Katana for the moment. My studies and projects are always leading me in new directions, though, and I'll keep you up to date on new developments.

Going back to the plugin, the current feature set will allow you to merge Genesis weights onto the custom-built Lumiere Rig template in Cinema 4D. The rig includes full IK, and in the future I'll be adding face controls and more functionality. Once your Genesis character is retargeted you'll be able to animate it the way you would in a program like Poser or Daz, taking full advantage of Cinema 4D's powerful timeline tools. Having the rig in Cinema 4D also gives you access to deformers you can see live, and lets your character interact live with fluid and particle simulations. The second part of the plugin lets you retarget motion capture onto the rig from external sources. That means you can continue to use MotionBuilder to define basic movements quickly, then take them into Cinema 4D for further development. Currently, as long as the bones in the other program match the rig, you should also be able to import data from elsewhere, like Endorphin or even Mixamo. I intend to enhance the retargeting and weight capabilities in future versions (there's a small illustrative sketch of the bone-matching idea at the end of this post).

Well, that's it for me today. Expect an announcement, and a tutorial, soon on the Lumiere Rig Tools. Eldiren out!
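P.S. For anyone curious, here is a tiny Python sketch of what "merging weights by matching bones" means in practice. The joint names and the remap_weights helper are purely illustrative assumptions on my part, not the plugin's actual code or the real Lumiere rig naming.

```python
# Illustrative only: a name-based remap of per-joint vertex weights, the same
# general idea as merging Genesis weights onto a rig template.
# All joint names below are hypothetical examples.

from typing import Dict, List

# Hypothetical mapping: source (Genesis-style) joint name -> target rig joint name
JOINT_MAP: Dict[str, str] = {
    "hip": "Lumiere_Pelvis",
    "lShldr": "Lumiere_L_UpperArm",
    "lForeArm": "Lumiere_L_LowerArm",
    "lHand": "Lumiere_L_Hand",
}

def remap_weights(source_weights: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Re-key per-joint vertex weight lists onto the target rig's joint names.

    source_weights: {source_joint_name: [weight per vertex]}
    Joints with no mapping are skipped; a real tool would report them instead.
    """
    remapped: Dict[str, List[float]] = {}
    for src_name, weights in source_weights.items():
        target = JOINT_MAP.get(src_name)
        if target is not None:
            remapped[target] = weights
    return remapped

if __name__ == "__main__":
    demo = {"lShldr": [0.0, 0.8, 0.2], "lForeArm": [0.0, 0.2, 0.8], "tongue": [0.0, 0.0, 0.1]}
    print(remap_weights(demo))  # 'tongue' has no target joint, so it drops out
```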

August Musings

Eldiren here. It's been quiet on the Logiciel front since March, but that's because life and education have forced me in different directions. As always, I'm trying to figure out the fastest way to approach a task, and to that end I'm always researching the latest and greatest software for the best solutions to problems. The big problem I've been having for about a year had to do with rendering hair, and getting a fast workflow that handles hair, volumetrics, and other rendered materials in a photorealistic way.

As evidenced by my tutorials, Daz makes it incredibly easy to hit the ground running on character creation. MotionBuilder is a quick tool to animate bipeds and props with. Faceshift lets us quickly get convincing facial animation. After Effects and Premiere will put together the final shots and take care of any blips in the road. My rendering and modeling package of choice, Cinema 4D, has been lacking the essential capabilities to produce the fast, super-real images I'm looking for, though.

Now, I've looked at Maya, and what always threw me off about it was some of the quirks it has when doing the littlest things. I can chalk part of that up to me, because even in C4D I produce geometry that is a little messed up when modeling. The thing is, I never got far enough in Maya to really run it through its paces as far as animation and dynamics go. I realized the error of my ways on that little point in the last few weeks. More on that in a tutorial, I think. C4D is still my darling, though, and frankly Maya suffers from issues similar to C4D's when it comes to the final render. They are both just too slow for an iterative process of texturing and lighting. The fact is, you have to hit that render button to see what the final look is going to be, and even though Maya's Viewport 2.0 is stellar and C4D, as always, has a lovely viewport, it's just not enough.

Enter my research into The Foundry's Katana, which aims to solve the problem of long look development and lighting cycles. Katana allows an artist to create 'recipes' which can then be applied to characters and assets across multiple shots quickly. It also lets you dial in these looks quickly and efficiently. The only problem with it is the requirement to dedicate a computer to Linux during its use. Most of my apps are on Windows, and my baby C4D isn't Linux compatible anyway. That makes Katana's workflow needlessly difficult for my studio.

So, doing a little more research, I found one program, and only one, that is a direct contender to Katana's very unique foothold in the 3D market. That program is Bakery Relight. Now, while Relight is really good, and you definitely should have a look at their site because the way it handles its workflow is quite intriguing, during my tests I ran into crashes left and right. Also, because of the specific way I handle my exports from C4D, I wasn't getting much-needed info to directly texture objects the way I wanted. Other things, like their guided hair geometries, weren't working well either, which was a big deal for me because I was still looking for a rendering system that could handle hair interpolation well. I almost thought all hope was lost. C4D's support for Renderman is woefully inadequate, and the new Pixar Renderman won't be free for non-commercial use until a few months from now. I've yet to dabble in Arnold, but it's making big waves. My biggest issue, of course, is diving into Maya and dealing with that pipeline.
Certainly more tools are there, and in fact I am using a lot more of them now thanks to some other new things I picked up over the last few months. The big point, though, is that a Katana-style workflow just makes sense. Animating your models without texturing them, tweaking the dynamics, doing morphs, and keeping the look and lighting work out of it just makes sense. We should save that work for a program that is optimized and prepared to handle it gracefully, and use our workhorses (C4D, Maya, 3ds Max, etc.) for the things they are good at. Traditional look dev is fine for the short projects I currently teach, or for quick parametric workflows like most motion graphics, but for the scenes I want to make I need something that can handle complexity quickly and easily.

So I finally found my renderer, and boy is it amazing. The post image was actually rendered in it. The reason I didn't mention it earlier as a direct competitor to Katana is that it does a combination of things in a way I've never seen done in any 3D program before. It truly is in its own niche. Enter Clarisse iFX, a hybrid rendering, animation, and compositing package that pretty much solves all of the problems I mentioned in this article.

So anyway, I've been doing a lot: making a lot of renders and playing with a lot of lighting and look development stuff, as well as dynamics packages. I'm planning a few big tutorials to finish up the overall multi-app pipeline, and Clarisse will definitely be included (I've tacked a toy sketch of the 'recipe' idea onto the end of this post). Eldiren out!
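P.S. Here's that sketch: a toy illustration, in plain Python, of the recipe idea, meaning reusable look rules applied to assets across shots while the animation scene stays untextured. None of this is Katana's or Clarisse's actual API; every class and field name is made up for the example.

```python
# Toy illustration of a "recipe"-style workflow: reusable look rules resolved
# against assets shot by shot, kept separate from the animation scene itself.
# This is NOT Katana's or Clarisse's API; all names here are invented.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LookRecipe:
    """A reusable look: asset-name patterns mapped to shader assignments."""
    name: str
    shader_assignments: Dict[str, str]   # substring pattern -> shader name

@dataclass
class Shot:
    name: str
    assets: List[str]
    resolved_looks: Dict[str, str] = field(default_factory=dict)

def apply_recipe(recipe: LookRecipe, shots: List[Shot]) -> None:
    """Resolve one recipe against every shot, instead of redoing look dev per scene file."""
    for shot in shots:
        for asset in shot.assets:
            for pattern, shader in recipe.shader_assignments.items():
                if pattern in asset:
                    shot.resolved_looks[asset] = shader

if __name__ == "__main__":
    hero_skin = LookRecipe("hero_skin", {"hero_body": "sss_skin_v3"})
    shots = [
        Shot("sh010", ["hero_body_geo", "street_set"]),
        Shot("sh020", ["hero_body_geo", "car_prop"]),
    ]
    apply_recipe(hero_skin, shots)
    for shot in shots:
        print(shot.name, shot.resolved_looks)
```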

Renderfarming & Camtasia Issues

So tonight I'll finally have the last tutorial up for the Painting & Detailing Intro series. My main issue was with Camtasia. For some reason, on my main machine it couldn't get any rendering speed on this final part. Was it a broken project file? Was it just the machine? Was it the footage? I recorded it all the same as usual, so I couldn't pinpoint it, but I did go through all the usual troubleshooting steps. I didn't want to render on my MacBook Pro because I figured the time difference would be too great, but considering the issues and the time to market, I ultimately gave in. I'm not sure how long it took to render, but it was reasonable, and I finally got the project out. So sometimes in production you can't get too caught up in troubleshooting; sometimes you just have to go straight to plan B, or the B machine, and take the supposed "hit". Anyway, I think the final chapter went well.
As for the new series, once again I teach you how to step away from the original Daz 3D texture we got in Importing from Daz, and move up to a hand-painted original texture using some resources from the net and some interesting tools from BodyPaint and Zbrush's Spotlight. Other topics covered include a detailed breakdown of Cinema 4D materials, multi-UV-tile workflows in Zbrush and Cinema 4D, more instruction on selection tags, and a breakdown of the Riptide Pro plugin. Here's the chapter breakdown:
  • 01. Painting & Detailing Intro
  • 02. UV Editing and materials in Bodypaint
  • 03. Exporting for Zbrush
  • 04. Importing into Zbrush
  • 05. Alternate UVs in Zbrush, and cleanup in C4D
  • 06. Painting and adding detail with Spotlight
  • 07. Exporting back to C4D
The training caps out at about 4 hours and 59 minutes. I hope you enjoy it as much as I enjoyed recording it.

Moving on, the next training I want to do takes me back to some of the 9-to-5 work I used to do in the military and government. In this one I'm going to teach you how to renderfarm. I may or may not have mentioned it previously, but with this new television production I'm doing, I've run into an inevitable problem many 2D compositors and 3D animators are going to face: I can't get preview or HQ content out to the client for review or release fast enough. The production has had to push its schedule far back to accommodate the render times. I'd been looking at render farms for some time because I realized that even on my simplest animated scenes, especially with GI, it would take six months to a year to produce a 120-minute film. I aim to do regular 24-minute-or-so animated productions, so that's not acceptable.

This particular television production takes 6-8 hours or so to render on my LGA 2011 machine with the Core i7 3820. It mostly consists of After Effects projects mingled with Premiere Pro, using some interesting plugins like Video Copilot's Element 3D and stuff from Red Giant, not to mention Keylight for some green screen work. These plugins can be relatively intense, and embedding After Effects in Premiere is certainly a speedy, convenient workflow, but you pay for it in render times and in real-time editing. Real-time editing is another bottleneck I'll discuss in detail later once I've got it toned down, but the point is I'd like to get footage out to the client at semi-decent settings for review much faster. To do so I can't beef up my machine much more; I'm using just about all the latest and greatest. I could upgrade to the hex-core, or even get a production server, but a friend of mine has a dual-socket Xeon hex-core beast like that, and things really aren't too much faster. You need to bring more machines into the mix and split the load (I've put a quick back-of-the-envelope script at the end of this post to show why).

So I finally settled on Smedge. Like most renderfarm managers, the interface takes some getting used to, but in terms of operation it works well. It's the first farm manager I've used that was able to easily find all my cross-platform machines and get jobs going nicely. It also has a quick, easy setup and allows tweaks to the network flow to achieve better throughput. For this next tutorial, because of the technical nature of renderfarms, I'll be diving into my network admin bag and pulling out some tricks of the trade. As usual I'll be very detailed every step of the way, giving you the how and why of each operation. I'm looking forward to this one and I hope you are too. Eldiren out!
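P.S. Here's that back-of-the-envelope script. The three-minutes-per-frame figure and the node counts are made-up example numbers, not measurements from my scenes; the point is just how the math scales. With these particular numbers, the single-machine case lands near the top of the six-months-to-a-year range I mentioned above.

```python
# Back-of-envelope math for why splitting renders across machines matters.
# The per-frame time and node counts are hypothetical example values.

def farm_days(runtime_minutes: float, fps: float, minutes_per_frame: float, nodes: int) -> float:
    """Rough wall-clock days to render a piece, assuming frames split evenly
    across nodes and ignoring transfer and queue overhead."""
    total_frames = runtime_minutes * 60 * fps
    total_render_minutes = total_frames * minutes_per_frame
    return total_render_minutes / nodes / 60 / 24

if __name__ == "__main__":
    # A 120-minute film at 24 fps, assuming 3 minutes per GI frame:
    print(f"1 machine:  {farm_days(120, 24, 3.0, 1):.0f} days")   # 360 days
    print(f"8 machines: {farm_days(120, 24, 3.0, 8):.0f} days")   # 45 days
    # A 24-minute episode on the same assumptions:
    print(f"Episode, 8 machines: {farm_days(24, 24, 3.0, 8):.1f} days")  # 9.0 days
```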

Motion Capture

Working on a 3D movie project for a client's Hollywood submission, I've finally had the opportunity to play around with multiple-Kinect capture using iPi Mocap Studio. Initial setup was interesting. Once again, finding quality tutorials was slightly tough, but I managed to find a good series on YouTube by Jimer Lins.
It actually led to the inadvertent discovery of Source Filmmaker by Valve, which is a very interesting tool. It basically allows you to take a game environment and manipulate it for film. I'm still doing a bit more research on it, but it sounds as if you can set up a scene directly from actual gameplay to get easy camera angles and movement, and then further manipulate characters in real time to achieve a presentation and composition pleasing to you. While not vastly different from MotionBuilder for composition, it does seem like it might have a more intuitive interface.
I digress, however. Two-camera motion capture is pretty awesome because it allows you to capture a full 360 degrees of body movement. I used it recently to capture a cell phone scene. While that seems simple enough, this was an aggressive call: lots of arm motion and pacing around. 180 degrees of freedom would have restricted my ability to act realistically in my makeshift studio.
Bear in mind that a two-Kinect studio, especially without the near mode provided by the Kinect for Windows, means you need lots of space, especially for more intense motions. My room still leaves a lot to be desired, but my basement might work considerably better with a bit of cleanup. Either way, I'll be doing my own tutorial on the setup soon. I still have to get the texturing tutorial out; I haven't forgotten about you guys. Eldiren out!

Late tutorials, and scene woes

Hey everybody, Eldiren again. So I've been working on the Painting and Texturing Intro tutorial series this week (one week later than I promised), and I've run into a few issues with it. This happens with most tutorials, of course. Generally when I record these I write some general notes describing what I want to talk about, and then, depending on how rusty I am with the subject matter, or if it's a new technique I'm just trying out myself, I typically do a run-through of the process with a copy of the c4d file. Usually, though, I like to go raw, doing the tutorial as I go and working through the flaws on screen with you guys. I ran into a unique issue with NitroBake in this particular tutorial, because it doesn't preserve the selection tag names very well. Also, because the model was broken up in the Softimage workflow, it's hard to simply copy-drag some selection tags back. I'm working that part out now so the tutorial flows seamlessly. I should have it up by Monday, I hope.
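For the simple case, where the baked mesh keeps the same polygon order, copying selection tags back is easy enough to script. Here's a minimal sketch using the standard Cinema 4D Python scripting API; the object names are placeholders, and it won't help when the topology changes, which is exactly my problem here.

```python
# A minimal Script Manager sketch (standard Cinema 4D Python API): clone every
# polygon selection tag from one object onto another. The object names below are
# placeholders. Selections store polygon indices, so this only makes sense when
# the baked mesh keeps the same polygon order as the original.

import c4d

def copy_selection_tags(source, target):
    """Clone each polygon selection tag on `source` onto `target`."""
    for tag in source.GetTags():
        if tag.GetType() == c4d.Tpolygonselection:
            # GetClone copies the tag's name along with its polygon selection
            target.InsertTag(tag.GetClone())

def main():
    doc = c4d.documents.GetActiveDocument()
    source = doc.SearchObject("Genesis_Original")  # placeholder object names
    target = doc.SearchObject("Genesis_Baked")
    if source and target:
        copy_selection_tags(source, target)
        c4d.EventAdd()  # refresh the UI

if __name__ == '__main__':
    main()
```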
Anyway, in case you missed it, the Softimage Face Robot workflow tutorials are up on the Logiciel Lumiere Channel: http://www.youtube.com/watch?v=Ey3O09h3spc In that series I cover Cinema 4D and Face Robot integration, along with workflow pitfalls and concerns. I also cover Faceshift, which released an update a few days ago. One of the big things in the update is lip-sync. The tool was already great for fine facial capture, but now I imagine the lip movements will be even more believable. I think I'll do an advanced series next where I cover a direct Faceshift workflow, as the program does allow you to import FBX models and work with them. Looking forward to that. Eldiren out!