<?xml version="1.0" encoding="utf-8"?> 
<rss version="2.0"
  xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
  xmlns:atom="http://www.w3.org/2005/Atom">

<channel>

<title>Photoindra: posts tagged houdini</title>
<link>https://mail.photoindra.com/tags/houdini/</link>
<description>My telegram</description>
<author></author>
<language>en</language>
<generator>Aegea 11.3 (v4134)</generator>

<itunes:subtitle>My telegram</itunes:subtitle>
<itunes:image href="" />
<itunes:explicit></itunes:explicit>

<item>
<title>Plasticity to Houdini 21 recipe</title>
<guid isPermaLink="false">286</guid>
<link>https://mail.photoindra.com/all/plasticity-to-houdini-21-recipe/</link>
<pubDate>Wed, 10 Sep 2025 14:19:35 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/plasticity-to-houdini-21-recipe/</comments>
<description>
&lt;p&gt;I’ve been using &lt;a href="https://www.plasticity.xyz/"&gt;Plasticity&lt;/a&gt; a lot lately. It’s a simple CAD tool for surface modeling – much easier than Fusion 360 or MOI. I can do modeling with booleans and curves there much faster.&lt;br /&gt;
Then I import the geometry into Houdini, and usually my import process looks like this:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/plast_import_01.jpg" width="571" height="815" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="1"&gt;
&lt;li&gt;Align the axes the only correct way: Y is up, Z points toward the viewer, and X points right.&lt;/li&gt;
&lt;li&gt;Scale to meters.&lt;/li&gt;
&lt;li&gt;Use a Match Size node in case I modeled without real-world dimensions.&lt;/li&gt;
&lt;li&gt;If you name your layers in Plasticity, they will come into Houdini as a “path” attribute, so you can easily convert them to groups with the Groups from Name node.&lt;/li&gt;
&lt;/ol&gt;
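&lt;p&gt;Step 4 can be sketched outside Houdini: turning a layer path into a group name is mostly string cleanup. This is a hypothetical helper for illustration, not what the Groups from Name node literally runs:&lt;/p&gt;

```python
import re

def path_to_group(path):
    """Turn a layer path like '/Body/Handle left' into a
    Houdini-friendly group name (illustrative sketch only)."""
    name = path.strip("/").split("/")[-1]   # last path component
    return re.sub(r"\W+", "_", name)        # keep only letters/digits/_

print(path_to_group("/Body/Handle left"))   # Handle_left
```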
&lt;p&gt;To make this transition as easy as possible I use the new Recipes system (added in Houdini 20.5). To share those recipes between my different workstations, I created a recipes.json file inside the Houdini packages folder with this content:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;{
  &amp;quot;hpath&amp;quot;: &amp;quot;path_to_your_cloud_folder/Documents/Houdini/recipes_folder&amp;quot;,
  &amp;quot;env&amp;quot;: [
    { &amp;quot;HOUDINI_CUSTOM_RECIPE_DIR&amp;quot;: &amp;quot;path_to_your_cloud_folder/Documents/Houdini/recipes_folder&amp;quot; },
    { &amp;quot;HOUDINI_CUSTOM_RECIPE_LIBRARY&amp;quot;: &amp;quot;custom_recipes&amp;quot; }
  ]
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this setup, when you save a new tool as a recipe, Houdini automatically locks the Save To field to Custom File Path, pointing to:&lt;br /&gt;
path_to_your_cloud_folder/Documents/Houdini/recipes_folder/otls/custom_recipes.hda&lt;/p&gt;
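&lt;p&gt;The package file is plain JSON, so before launching Houdini you can sanity-check it for typos with a few lines of Python (the paths below are the same placeholders as above):&lt;/p&gt;

```python
import json

# Minimal sanity check of the package file described above.
# The paths are placeholders, not real locations.
package = """
{
  "hpath": "path_to_your_cloud_folder/Documents/Houdini/recipes_folder",
  "env": [
    { "HOUDINI_CUSTOM_RECIPE_DIR": "path_to_your_cloud_folder/Documents/Houdini/recipes_folder" },
    { "HOUDINI_CUSTOM_RECIPE_LIBRARY": "custom_recipes" }
  ]
}
"""
data = json.loads(package)   # raises ValueError on a syntax typo
print(sorted(key for entry in data["env"] for key in entry))
```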
&lt;p&gt;Here’s an example of another recipe that helps process low- and high-poly meshes for baking in Marmoset:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/plast_import_02@2x.jpg" width="959" height="677" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;I’ll post more details about this project later, but the main idea is to use crude low-res geo from Plasticity, clean it up in Houdini, and at the same time import mid-poly. Then, use it in combination with Marmoset’s &lt;a href="https://marmoset.co/posts/revolutionize-your-3d-workflow-with-toolbags-bevel-shader/"&gt;rounded edge baking&lt;/a&gt;.&lt;br /&gt;
No more micro-beveling inside Plasticity or ZBrush.&lt;/p&gt;
</description>
</item>

<item>
<title>Houdini to Redshift: Keeping Colors Sharp</title>
<guid isPermaLink="false">285</guid>
<link>https://mail.photoindra.com/all/houdini-to-redshift-keeping-colors-sharp/</link>
<pubDate>Thu, 07 Nov 2024 05:39:30 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/houdini-to-redshift-keeping-colors-sharp/</comments>
<description>
&lt;p&gt;In Houdini, I usually assign color to primitives (though Houdini defaults to assigning it to “points”). However, if you want Redshift to recognize color attributes (using RSUserDataColor), you need to promote the Cd attribute to points or vertices, as Redshift doesn’t interpret it directly on polygons.&lt;/p&gt;
&lt;p&gt;Promoting Cd to points will result in color blending when you subdivide the model, which can create blurred colors. To maintain sharp color boundaries, promote Cd to vertices instead, as Redshift can understand vertex-level color attributes clearly.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houini_rs_colors_on_points_vs_vertices@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Baking textures with Redshift inside Houdini</title>
<guid isPermaLink="false">283</guid>
<link>https://mail.photoindra.com/all/baking-textures-with-redshift-inside-houdini/</link>
<pubDate>Wed, 14 Aug 2024 16:22:15 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/baking-textures-with-redshift-inside-houdini/</comments>
<description>
&lt;p&gt;I had to bake some texture maps in Redshift inside Houdini. I haven’t seen any clear tutorials on how to do that, so here is a short guide.&lt;br /&gt;
Let’s say I created a complex material mixing different textures, adjusting them with color corrections, gradients, and noises. I’m happy with how it looks in the Redshift renderer, and I want to pass the geometry to 3ds Max and set up the materials with Corona.&lt;br /&gt;
The general idea is that you need to create custom AOVs to bake all those textures. Link to the documentation about custom AOVs:&lt;br /&gt;
&lt;a href="https://help.maxon.net/r3d/houdini/en-us/#html/Custom+AOVs.html"&gt;https://help.maxon.net/r3d/houdini/en-us/#html/Custom+AOVs.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Here is what a test material network looks like:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/rs_mat_screenshot@2x.png.jpg" width="2560" height="1297" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Just for visual reference, I add black nulls to mark which maps I want to bake, and connect those nulls to the red nodes (StoreColorToAOV or StoreIntegerToAOV). Sadly, you can’t use `opinput(".", 0)` to get the name of the connected node in the MAT context like you can in SOPs, so you’ll need to copy-paste the names from the nulls.&lt;/p&gt;
&lt;p&gt;Create a separate Redshift render node. In the RenderMaps tab, enable Render Maps Baking.&lt;br /&gt;
If during testing you want to switch quickly between texture resolutions (from 512×512 px to 1024, 2048, or 4096), add an integer slider with a range from 0 to 3 to the interface of the RenderMaps tab (I called it indra_res_mult) and in the Output resolution add:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;512*pow(2, ch(&amp;quot;indra_res_mult&amp;quot;))&lt;/code&gt;&lt;/pre&gt;&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/rs_render_node@2x.png" width="867" height="642" alt="" /&gt;
&lt;/div&gt;
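&lt;p&gt;The expression above just maps the 0–3 slider to powers of two. The same mapping in Python, to check the values it produces:&lt;/p&gt;

```python
# Python equivalent of the HScript expression 512*pow(2, ch("indra_res_mult"))
def bake_resolution(res_mult):
    return 512 * 2 ** res_mult

print([bake_resolution(m) for m in range(4)])   # [512, 1024, 2048, 4096]
```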
&lt;p&gt;If it is a black-and-white texture like roughness or a mask, use the scalar data type; it will then be saved as an 8-bit greyscale image. These maps need to be gamma 1, but if you render them to PNG, Redshift saves them with gamma 2.2. So if you want gamma 1, save to TIF. For channels like base color, use PNG.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/rs_render_customAOV@2x.png.jpg" width="2560" height="2342" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Things to remember:&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;No tessellation at the OBJ level. It took me more than an hour to figure out why my maps looked strange, and it was just this one checkbox.&lt;/li&gt;
&lt;li&gt;No overlapping UVs.&lt;/li&gt;
&lt;li&gt;Faces have to be planar. I personally didn’t have a problem with this, but &lt;a href="https://youtu.be/oa-rlJmduWA?t=212"&gt;in this video&lt;/a&gt; for C4D it is recommended to triangulate.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Tips:&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;If you have assigned groups to the geometry and want to use them inside Redshift materials, you need to promote them to vertices at the SOP level and switch on “Output as Integer Attribute”. You don’t need to create a node for each group – just use GRP_* in the group name field. (I usually start the names of all groups that I want to keep with GRP_.) Then read the attribute inside materials with an “RS Integer User Data” node. Just writing “GRP_group_name” or “group:GRP_group_name” will not work; that’s why we need to convert the group to an integer attribute.&lt;/li&gt;
&lt;li&gt;To write those masks you need to use “RS Store Integer to AOV”; StoreColorToAOV will not work.&lt;/li&gt;
&lt;/ol&gt;
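&lt;p&gt;What the group-to-integer promotion amounts to can be sketched in plain Python (mock data, not the actual Houdini API): every vertex gets the index of the GRP_* group it belongs to, and that number is what RS Integer User Data reads back:&lt;/p&gt;

```python
from fnmatch import fnmatch

# Mock geometry: group name -> vertex numbers it contains.
groups = {
    "GRP_metal":   [0, 1],
    "GRP_plastic": [2, 3],
    "temp_sel":    [1, 2],   # ignored: does not match GRP_*
}

# Bake only the GRP_* groups into a per-vertex integer attribute.
baked = sorted(name for name in groups if fnmatch(name, "GRP_*"))
attrib = {}
for index, name in enumerate(baked):
    for vtx in groups[name]:
        attrib[vtx] = index

print(baked)    # ['GRP_metal', 'GRP_plastic']
print(attrib)   # {0: 0, 1: 0, 2: 1, 3: 1}
```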
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/group_promote@2x.png.jpg" width="2560" height="1722" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Flexible Color Assignment in Houdini and Redshift</title>
<guid isPermaLink="false">282</guid>
<link>https://mail.photoindra.com/all/flexible-color-assignment-in-houdini-and-redshift/</link>
<pubDate>Wed, 07 Aug 2024 12:50:28 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/flexible-color-assignment-in-houdini-and-redshift/</comments>
<description>
&lt;p&gt;&lt;b&gt;How do you assign random colors from a specific set to objects and keep the setup flexible for changes with Houdini and Redshift?&lt;/b&gt;&lt;br /&gt;
Let’s say we have several plastic cups in a scene and 4 specific colors from a client.&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;Create a class attribute with a connectivity node (you can name it whatever you like).&lt;/li&gt;
&lt;li&gt;Promote it to the vertex level with an attribute promote.&lt;/li&gt;
&lt;li&gt;In the shader tree, use an RS Integer User Data node to bring in the attribute named “class” (or any other name that you gave it earlier).&lt;/li&gt;
&lt;li&gt;Connect it to an RS Jitter node (name it “max_variations_01”) and select “User Data ID” in Input ID Mode. In “integer jitter,” set the min to 0 and the max to 3, so we will have 4 variations. With this node, we only control the number of variations.&lt;/li&gt;
&lt;li&gt;Create another RS Jitter node (name it “lightness_range_01”). We will use it to create lightness variations. Keep the color black and set Saturation Variation Max to 0. Now, with the Value Seed, you can control the randomness.&lt;/li&gt;
&lt;li&gt;Create an RS Color Ramp (name it “recolor_01”) with 4 colors from your client and set the interpolation to constant.&lt;/li&gt;
&lt;li&gt;After adjusting the seed on the “lightness_range_01” node, you will need to move the colors a little bit on “recolor_01” so each of them will end up in a range generated by “lightness_range_01.”&lt;/li&gt;
&lt;/ol&gt;
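&lt;p&gt;The logic of the jitter-plus-ramp chain can be sketched in plain Python: the integer jitter picks one of four bins, and a constant-interpolation ramp turns the bin into a client color. The colors and function names below are placeholders, not the Redshift internals:&lt;/p&gt;

```python
CLIENT_COLORS = [
    (0.90, 0.10, 0.10),   # placeholder values, not real client colors
    (0.10, 0.60, 0.90),
    (0.95, 0.80, 0.10),
    (0.20, 0.70, 0.30),
]

def constant_ramp(u, colors=CLIENT_COLORS):
    """Constant interpolation: u in [0, 1) falls into one of len(colors) bins."""
    index = min(int(u * len(colors)), len(colors) - 1)
    return colors[index]

def jitter_color(obj_class, variations=4):
    # "User Data ID" mode: the class attribute drives which bin we land in
    u = (obj_class % variations) / variations
    return constant_ramp(u)

print(jitter_color(5))   # class 5 falls in bin 1
```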
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/jitter_material_gif.gif" width="1280" height="720" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Another thing you can do is offset the UVs for each object, so when you add textures for roughness, the repetition will not be obvious. To do this, after the connectivity node, add an attribute wrangle node (Run Over Vertices) with this:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;@uv.x+=rand(@class);&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Before:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/happy3d_corona_lesson_11_v001.uv_offset_off@2x.jpg" width="512" height="512" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;After:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/happy3d_corona_lesson_11_v001.uv_offset_on@2x.jpg" width="512" height="512" alt="" /&gt;
&lt;/div&gt;
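&lt;p&gt;The one-liner wrangle above gives every connected piece its own stable U offset. A plain-Python stand-in (a hash-based substitute for VEX rand(), so the values differ from Houdini’s):&lt;/p&gt;

```python
import hashlib

def rand01(seed):
    """Deterministic pseudo-random float in [0, 1) per seed."""
    digest = hashlib.sha256(str(seed).encode()).digest()
    return digest[0] / 256.0

def offset_u(u, obj_class):
    # mirrors @uv.x += rand(@class); wrapped like a repeating texture
    return (u + rand01(obj_class)) % 1.0

print(rand01(7) == rand01(7))        # True: same class, same offset
print(round(offset_u(0.25, 7), 3))   # stays in [0, 1)
```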
</description>
</item>

<item>
<title>Search and replace paths in Houdini</title>
<guid isPermaLink="false">280</guid>
<link>https://mail.photoindra.com/all/search-and-replace-paths-in-houdini/</link>
<pubDate>Tue, 28 Feb 2023 12:00:51 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/search-and-replace-paths-in-houdini/</comments>
<description>
&lt;p&gt;If you want to search and replace paths in multiple locations, use the Windows -&gt; Hscript Textport window&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/SCR-20230228-fu9@2x.png" width="647" height="369" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;and write:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;opchange \$DOWNLOADS&amp;quot;/wetransfer&amp;quot; \$JOB&amp;quot;/geo&amp;quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The backslash before a variable keeps it ($JOB and $DOWNLOADS in this case) from being expanded to a full path.&lt;/p&gt;
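&lt;p&gt;Conceptually, opchange is just a substring search-and-replace over every path parameter in the scene. A toy Python model of that behavior, with a plain dict standing in for the node graph:&lt;/p&gt;

```python
# Rough model of opchange: replace a substring in every path parameter.
def opchange(parms, old, new):
    return {name: value.replace(old, new) for name, value in parms.items()}

parms = {
    "/obj/geo1/file1/file": "$DOWNLOADS/wetransfer/cup.fbx",
    "/obj/geo1/file2/file": "$DOWNLOADS/wetransfer/lid.fbx",
}
changed = opchange(parms, "$DOWNLOADS/wetransfer", "$JOB/geo")
print(changed["/obj/geo1/file1/file"])   # $JOB/geo/cup.fbx
```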
&lt;p&gt;Another case: you imported an FBX with materials. You can move them to the mat context and change the paths with this:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;opchange ../../materials /mat&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Another example: on a Windows machine, for some reason, the textures use absolute paths, and I want to remap them all to the root of the job project:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;opchange &amp;quot;C:/Users/user_name/Dropbox/Work/project_name&amp;quot; \$JOB&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Documentation:&lt;br /&gt;
&lt;a href="https://www.sidefx.com/docs/houdini/commands/opchange.html"&gt;https://www.sidefx.com/docs/houdini/commands/opchange.html&lt;/a&gt;&lt;/p&gt;
</description>
</item>

<item>
<title>Piano modeling in Houdini</title>
<guid isPermaLink="false">277</guid>
<link>https://mail.photoindra.com/all/piano-modeling-in-houdini/</link>
<pubDate>Sat, 21 May 2022 04:51:19 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/piano-modeling-in-houdini/</comments>
<description>
&lt;p&gt;Still getting used to &lt;a href="https://alexeyvanzhula.gumroad.com/l/axefdr"&gt;Modeler 2022 plugin&lt;/a&gt; in Houdini.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/instruments_piano_v01_rev_01_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;WIP:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/712114258?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="960" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen title="Hunting for likes"&gt;&lt;/iframe&gt;
</description>
</item>

<item>
<title>Houdini – random coloring from image palette.</title>
<guid isPermaLink="false">276</guid>
<link>https://mail.photoindra.com/all/houdini-random-coloring-from-image-palette/</link>
<pubDate>Sat, 14 May 2022 17:51:25 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/houdini-random-coloring-from-image-palette/</comments>
<description>
&lt;p&gt;I was trying to optimize my coloring process for a project. Here is where I am right now:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_all@2x.png" width="512" height="814" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Coloring process:&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;Get the palette I want as a screenshot from here:&lt;br /&gt;
&lt;a href="https://paletton.com/#uid=60B0u0kllzcboPZgUH4pEuxt-pp"&gt;https://paletton.com/#uid=60B0u0kllzcboPZgUH4pEuxt-pp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Convert the image to Utility-Texture_sRGB with the target color space ACEScg using the PYCO ColorSpace converter. (I still need to run some more tests on this part by using these .exr files as an emission texture to compare the colors with a reference.)&lt;br /&gt;
&lt;a href="https://pyco.gumroad.com/l/pycocs"&gt;https://pyco.gumroad.com/l/pycocs&lt;/a&gt;&lt;br /&gt;
Free with the code &lt;i&gt;free&lt;/i&gt; at checkout.&lt;/li&gt;
&lt;li&gt;From GitHub you can install Color Palette Ramp – a Houdini HDA that creates a ramp based on a color palette from an image.&lt;br /&gt;
&lt;a href="https://github.com/jamesrobinsonvfx/colorpaletteramp"&gt;https://github.com/jamesrobinsonvfx/colorpaletteramp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;In Houdini, use that HDA (colorpaletteramp) at the SOP level to create a ramp. If I got the image from the Paletton webpage, I use Stops -&gt; 20, but something around 10 works great for other images.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_171514.png" width="494" height="688" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="5"&gt;
&lt;li&gt;With OD Tools you can right-click on the result and choose “Palletize Ramp [OD]” to make the color separation constant, so it looks more like a palette instead of a gradient. You can get OD Houdini Shelf Tools 2021 for $100 here:&lt;br /&gt;
&lt;a href="https://origamidigital.com/cart/index.php?route=product/product&amp;manufacturer_id=11&amp;product_id=66"&gt;https://origamidigital.com/cart/index.php?route=product/product&amp;manufacturer_id=11&amp;product_id=66&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_171537.png" width="505" height="1292" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="6"&gt;
&lt;li&gt;You can save this ramp in your OD Asset library for future use.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_174942.png" width="498" height="690" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="7"&gt;
&lt;li&gt;To color geometry based on disconnected pieces: first use a “Connectivity” node on points to create an integer attribute called id. Then use an “Attribute Adjust Color” node with Adjustment Value -&gt; Pattern Type set to Random, Randomization By -&gt; Custom Attribute, and Custom Attribute -&gt; id. Then, by changing the seed parameter, you can get random color combinations.&lt;/li&gt;
&lt;/ol&gt;
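&lt;p&gt;That last step boils down to a seeded random pick per piece id. A minimal Python sketch of the behavior (placeholder palette and function names, not the Attribute Adjust Color internals):&lt;/p&gt;

```python
import random

PALETTE = ["#ffaa00", "#29d9c2", "#3f51b5", "#f2385a"]   # placeholder colors

def color_for_piece(piece_id, seed):
    # stable per (seed, id): the same seed always yields the same combination,
    # and changing the seed reshuffles every piece at once
    rng = random.Random(f"{seed}:{piece_id}")
    return rng.choice(PALETTE)

combo = [color_for_piece(i, seed=1) for i in range(5)]
print(combo == [color_for_piece(i, seed=1) for i in range(5)])   # True
```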
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_174557.png" width="508" height="898" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Results from 3 different ramps:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="540" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_01_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_02_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_03_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_04_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_05_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_06_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Hunting for likes</title>
<guid isPermaLink="false">275</guid>
<link>https://mail.photoindra.com/all/hunting-for-likes/</link>
<pubDate>Thu, 31 Mar 2022 09:04:14 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/hunting-for-likes/</comments>
<description>
&lt;p&gt;A small personal project that I made in Houdini while testing the Axiom GPU solver. The idea is that we are always hunting for likes and “hearts”, setting traps with hot topics.&lt;br /&gt;
Looped video with music:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/694429700?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="540" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen title="Hunting for likes"&gt;&lt;/iframe&gt;
&lt;p&gt;Work in progress:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="1080" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_01.idea_sketch@2x.jpg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_04.sound_fx_recording@2x.jpg" width="1080" height="1080" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_04.jpg" width="1080" height="1080" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_03@2x.jpeg" width="1280" height="720" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_02@2x.jpeg" width="1280" height="720" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Stills:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="1080" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.static.cam1_static.0176@2x.jpeg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.static.cam1_static.0148@2x.jpeg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.static.cam1_static.0120@2x.jpeg" width="1080" height="1080" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Shaman – Houdini vs Blender</title>
<guid isPermaLink="false">273</guid>
<link>https://mail.photoindra.com/all/shaman/</link>
<pubDate>Sat, 11 Dec 2021 12:24:42 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/shaman/</comments>
<description>
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="800" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_01@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_01_wireframe@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_01_uv@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I had wanted to try Blender for a long time, and came across a series of tutorials from the YouTube channel &lt;a href="https://www.youtube.com/channel/UCLYrT1051M_6XkbEc5Te8PA"&gt;Blender 3d&lt;/a&gt;. After watching it, it became clear why so many people love this free software.&lt;br /&gt;
I started in Blender, but then jumped back into Houdini. With a plugin called Modeler, you can repeat the steps without problems.&lt;br /&gt;
Here is a “turntable” and then a sped-up walkthrough:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/654912254?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="640" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;I did the UVs in RizomUV. They have just &lt;a href="https://www.rizom-lab.com/whats-new-in-rizomuv-2022/"&gt;released an update&lt;/a&gt;, and now you can insert one group into another. For example, a group of “feathers” can be included in the “head” group and packed together. One of my favorite tricks: you can pack the islands using their direction in 3D space. Want everything to be aligned by Y in UV space? Just a click of a button. By the way, the groups made in Houdini are visible in Rizom.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_uv_ver02_wip@2x.jpg" width="960" height="369" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;After UVing, I imported some groups into ZBrush to add details.&lt;br /&gt;
I baked from high to low in Marmoset. It also understands groups from Houdini, so it is not necessary to export an “exploded” mesh separately, as is usually done for baking in Substance. Another nice thing about it is the auto-reload of textures and geometry: if you change something in another program and save, Marmoset automatically shows those changes.&lt;br /&gt;
I textured in Substance Painter, then rendered in Houdini with Redshift.&lt;/p&gt;
&lt;p&gt;To make the cartoon outline, I cloned the geometry and assigned a double-sided material to it. The “front” is transparent, and the “back” has only a black emission material assigned. Then you add displacement with a constant instead of a texture, and that’s it. You can control the thickness of the line with the amount of displacement, and the color of the line with the emission (yellow in this example):&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_material_setup@2x.jpg" width="905" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Then I repeated the same trick in Marmoset. It works when rendering, but displacement is not supported in the “viewer”. So if you want to send the client a link so they can rotate the model in the browser, you need another approach:&lt;br /&gt;
I exported additional geometry from Houdini, but with reversed normals and slightly inflated with a “Peak” node. Then in Marmoset I assigned a new dark material without reflections and set the Diffusion module to Unlit.&lt;/p&gt;
&lt;p&gt;Here is the result that you can rotate:&lt;/p&gt;
&lt;iframe src="https://photoindra.com/2021/shaman/index.html" width="640" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;And couple more renders from Redshift:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_02@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_03@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;&lt;i&gt;Original concept drawing was made by amazing &lt;a href="https://www.behance.net/gallery/19338867/La-Foret-Oublie-Characters"&gt;La Foret Oublie&lt;/a&gt;.&lt;/i&gt;&lt;br /&gt;
&lt;i&gt;And here is a &lt;a href="https://marmoset.co/posts/texturing-rendering-zbrush-sketches-toolbag-4/"&gt;great article&lt;/a&gt; on shading in Marmoset.&lt;/i&gt;&lt;/p&gt;
</description>
</item>

<item>
<title>Bomba &amp; Plena – vol 26</title>
<guid isPermaLink="false">269</guid>
<link>https://mail.photoindra.com/all/bomba-plena/</link>
<pubDate>Fri, 14 May 2021 11:29:12 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/bomba-plena/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/549348031?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="960" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;Short looped animation to promote local old-school reggae event.&lt;/p&gt;
&lt;p&gt;Wallpaper in 4k:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/bomba_plena_vol_26_wallpaper_4k.jpeg.jpg" width="2560" height="1440" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/cam_static_hires_@2x.jpg" width="720" height="1280" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>What language to use in Panama.</title>
<guid isPermaLink="false">267</guid>
<link>https://mail.photoindra.com/all/what-language-to-use-in-panama/</link>
<pubDate>Thu, 25 Mar 2021 07:51:16 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/what-language-to-use-in-panama/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/528805285?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="960" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;Sometimes I feel like writing or talking on social media, but deciding which language to use is tricky. Russians don’t know Spanish. Many Panamanians do not speak English fluently and certainly do not know Russian.&lt;br /&gt;
Everything I watch and read is in English, but I rarely speak it. All my notes are also in English.&lt;br /&gt;
After ten years in the tropics, I have begun to forget some Russian words. Whether it is worth remembering them, I do not know. Better to learn to speak correct Spanish – I often speak like a dockworker, with zero benevolence. My wife suggested that at the beginning of conversations with new people I mention that I am Russian, and that frankness is not considered rudeness in our country.&lt;/p&gt;
&lt;p&gt;In the &lt;a href="https://www.youtube.com/watch?v=82O5QkBdR00"&gt;video&lt;/a&gt; suggested by YouTube, a guy makes a dialog icon in five minutes in Sketch:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/sketch_app_idea.jpg" width="592" height="334" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;I liked the thumbnail, so I made it in 3D.&lt;br /&gt;
At the same time, I practiced stitching high-dynamic-range panoramas, which are used for lighting most scenes in 3D. Just before the start of the quarantine, I took a few photos in the office. Unfortunately, the only program that stitches them well (&lt;a href="https://www.ptgui.com/"&gt;PTGui 12&lt;/a&gt;) costs $300. The demo works without restrictions, but it fills everything with watermarks. In Photoshop, however, they can be erased even in 32 bits. I guess that’s ok for just a fun project.&lt;br /&gt;
I also figured out a little bit more about the differences between point and primitive normals in Houdini.&lt;/p&gt;
&lt;h2&gt;Sometimes I need to get vector graphics from Illustrator and extrude them in Houdini.&lt;/h2&gt;
&lt;p&gt;The constant &lt;b&gt;problem&lt;/b&gt; is that some parts can be flipped and will extrude in a different direction. This happens because the paths were drawn in different directions in Illustrator.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houdini_illustrator_problem_01@2x.jpg" width="1064" height="868" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;So I thought I could add an Attribute Expression node, set it to points, set the attribute to normal, set the VEXpression dropdown to Constant Value, and write 0, 1, 0 in Constant Value to get normals pointing up. But that will not change the primitive normals, because primitive normals are not actually an attribute: they are derived information, calculated from the vertices that make up the primitive, and as such they cannot be modified. You can still use PolyExtrude, set it to Point Normal and the extrusion mode to Existing, but you will end up with geo where some primitive normals point “out” and others point “in”. I don’t know if there is an easy fix for that.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houdini_illustrator_problem_02@2x.jpg" width="1064" height="868" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;So after you bring your paths in from Illustrator, you first need to separate the primitives that are flipped. You can do this with a simple Group node: use only “Keep by Normals”, set the direction to 0, -1, 0, and lower the spread angle. There is also a Labs Split Primitives by Normal node that does exactly this with fewer clicks.&lt;br /&gt;
Then use a Reverse node. It WILL reverse the vertex order of the primitives.&lt;/p&gt;
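&lt;p&gt;Why the Reverse node works: a primitive normal is derived from the vertex winding, so reversing the vertex order negates it. A quick cross-product check in Python:&lt;/p&gt;

```python
# Normal of a triangle from its vertex winding (right-hand rule):
# reversing the vertex order flips the resulting normal.
def tri_normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

tri = [(0, 0, 0), (1, 0, 0), (0, 0, -1)]
print(tri_normal(*tri))             # [0, 1, 0]  -- points up
print(tri_normal(*reversed(tri)))   # [0, -1, 0] -- reversed winding points down
```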
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houdini_illustrator_problem_03@2x.jpg" width="1064" height="868" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Takes are also amazing. You can create different versions of a scene, and in the render node save the image with the take name like this:&lt;br /&gt;
$HIP/render/r01/bubbles.static.s5.`chsop("take")`.$F2.tif&lt;br /&gt;
The `chsop("take")` part is responsible for the take name. In my case the output names will be:&lt;br /&gt;
bubbles.static.s5.blue_bubbles.01.tif&lt;br /&gt;
bubbles.static.s5.orange_bubbles.01.tif&lt;/p&gt;
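&lt;p&gt;A tiny plain-Python sketch of what that substitution amounts to (the take names and frame padding are taken from the example above):&lt;/p&gt;

```python
# The chsop("take") expression drops the current take name into the path,
# and $F2 pads the frame number to two digits. Equivalent in Python:
template = "bubbles.static.s5.{take}.{frame:02d}.tif"
names = [template.format(take=t, frame=1)
         for t in ("blue_bubbles", "orange_bubbles")]
print(names)
# ['bubbles.static.s5.blue_bubbles.01.tif',
#  'bubbles.static.s5.orange_bubbles.01.tif']
```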
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/bubbles.static.s5.orange_bubbles@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/bubbles.static.s5.blue_bubbles@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Ya no tengo apetito</title>
<guid isPermaLink="false">265</guid>
<link>https://mail.photoindra.com/all/ya-no-tengo-apetito/</link>
<pubDate>Tue, 08 Dec 2020 21:59:58 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/ya-no-tengo-apetito/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/488773528?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="640" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;3d modeled in Modo, assembled in Houdini, rendered with Redshift and post in Nuke.&lt;/p&gt;
</description>
</item>

<item>
<title>Gif creation from sequence of images or mp4</title>
<guid isPermaLink="false">262</guid>
<link>https://mail.photoindra.com/all/gif-creation-from-sequence-of-images-or-mp4/</link>
<pubDate>Mon, 05 Oct 2020 08:52:47 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/gif-creation-from-sequence-of-images-or-mp4/</comments>
<description>
&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/465003616?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="360" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;p&gt;When I need to create gifs I usually use After Effects and the &lt;a href="https://aescripts.com/gifgun/"&gt;GifGun&lt;/a&gt; plugin.&lt;br /&gt;
But sometimes it doesn’t play nice with colors. It has an “experimental engine” as an option, but that results in huge gifs.&lt;br /&gt;
Some people use &lt;a href="https://ezgif.com/video-to-gif"&gt;Ezgif&lt;/a&gt;. But it has some limitations and you need to upload your files to the web.&lt;/p&gt;
&lt;p&gt;So this weekend I tried &lt;a href="https://imagemagick.org/script/download.php"&gt;ImageMagick&lt;/a&gt;.&lt;br /&gt;
It also installs FFmpeg. So you use FFmpeg first to convert videos or image sequences to gif, and then compress the result with ImageMagick. There is no GUI on Windows, so you use the Command Prompt (it’s like Terminal on Mac).&lt;/p&gt;
&lt;p&gt;First, change your current location to the directory where your files are. Just type cd, drag and drop the directory into the “terminal” and hit Enter.&lt;/p&gt;
&lt;p&gt;I rendered a 720x1280px image sequence in Houdini with names like this:&lt;br /&gt;
CAM_top.0001.tif,&lt;br /&gt;
CAM_top.0002.tif,&lt;br /&gt;
...&lt;br /&gt;
CAM_top.0072.tif&lt;/p&gt;
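&lt;p&gt;For reference, the %04d placeholder that ffmpeg uses to pick up such a sequence means “a zero-padded four-digit frame number”; here is the same pattern expanded in Python:&lt;/p&gt;

```python
# "%04d" zero-pads the frame number to 4 digits, matching the
# CAM_top.0001.tif ... CAM_top.0072.tif names from the render above.
frames = ["CAM_top.%04d.tif" % f for f in range(1, 73)]
print(frames[0], frames[-1])  # CAM_top.0001.tif CAM_top.0072.tif
```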
&lt;p&gt;So to convert it to a 24 fps gif at half resolution with a custom color palette (for better color representation) I used this line:&lt;br /&gt;
&lt;code&gt;&lt;br /&gt;
ffmpeg -r 24 -i CAM_top.%04d.tif -filter_complex "[0:v] scale=360:-1,split [a][b];[a] palettegen [p];[b][p] paletteuse" output_360px.gif&lt;br /&gt;
&lt;/code&gt;&lt;br /&gt;
And then compressed it with ImageMagick like this:&lt;br /&gt;
&lt;code&gt;&lt;br /&gt;
magick convert output_360px.gif -fuzz 1% -layers Optimize output_360px_fuzz1.gif&lt;br /&gt;
&lt;/code&gt;&lt;br /&gt;
-fuzz 1% controls the amount of compression. In some other tests, for example, -fuzz 5% gave OK results and smaller file sizes.&lt;br /&gt;
Here is another example of creating a gif from an mp4 – with rescaling, a changed fps and a single color palette. Split just means that we split the video stream in two: one copy is used for palette generation and the other for the conversion to gif afterwards:&lt;br /&gt;
&lt;code&gt;&lt;br /&gt;
ffmpeg -i video_source.mp4 -filter_complex "[0:v] fps=12,scale=540:-1,split [a][b];[a] palettegen [p];[b][p] paletteuse" output_name.gif&lt;br /&gt;
&lt;/code&gt;&lt;br /&gt;
And the same, but with a palette generated for every frame – this helps if there are lots of color variations in the gif, but it will increase the size:&lt;br /&gt;
&lt;code&gt;&lt;br /&gt;
ffmpeg -i video_source.mp4 -filter_complex "[0:v] fps=12,scale=w=540:h=-1,split [a][b];[a] palettegen=stats_mode=single [p];[b][p] paletteuse=new=1" output_name.gif&lt;br /&gt;
&lt;/code&gt;&lt;br /&gt;
Here is a single camera view (6 MB):&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/CAM_top_360px_fuzz1.gif" width="360" height="640" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;And here is a 15-second version with lots of motion (21 MB):&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/output_optim_fuzz03.gif" width="360" height="640" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Starting with Houdini</title>
<guid isPermaLink="false">259</guid>
<link>https://mail.photoindra.com/all/starting-with-houdini/</link>
<pubDate>Sat, 18 Apr 2020 18:12:37 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/starting-with-houdini/</comments>
<description>
&lt;p&gt;I’ve been using Houdini on and off for a couple of years, and some concepts are difficult to grasp – especially if you’re coming from another 3d application.&lt;br /&gt;
There is a short course that I can now recommend to anyone who wants to start having fun with Houdini:&lt;br /&gt;
&lt;a href="https://www.hipflask.how/register-for-core-essentials"&gt;Houdini Made Easy 01 – The Core Essentials&lt;/a&gt; from Hipflask. It’s completely free.&lt;br /&gt;
They also have 50% off 2 other courses until April 30th, 2020. I finished them and can’t recommend them enough – one of the best teachers and ways of presenting information.&lt;br /&gt;
A random short animation:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/409253151?background=1&amp;title=0&amp;portrait=0&amp;transparent=0&amp;byline=0&amp;sidedock=0&amp;autoplay=1&amp;muted=1&amp;loop=1&amp;autopause=0" width="320" height="320" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
</description>
</item>

<item>
<title>3d panama</title>
<guid isPermaLink="false">247</guid>
<link>https://mail.photoindra.com/all/3d-panama/</link>
<pubDate>Wed, 06 Nov 2019 12:12:45 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/3d-panama/</comments>
<description>
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/3dpanama_02_B@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;I made a small Telegram group for 3d artists in Panama to share tips and tricks. Here is the link if you want to join:&lt;br /&gt;
&lt;a href="https://t.me/joinchat/GIQbBRUr4yHO5yrHz_VZ1g"&gt;https://t.me/joinchat/GIQbBRUr4yHO5yrHz_VZ1g&lt;/a&gt;&lt;/p&gt;
</description>
</item>

<item>
<title>Globe of Panama</title>
<guid isPermaLink="false">246</guid>
<link>https://mail.photoindra.com/all/globe-of-panama/</link>
<pubDate>Sun, 23 Jun 2019 10:55:17 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/globe-of-panama/</comments>
<description>
&lt;p&gt;Some experiments that I made figuring out the best way to pass geo from Houdini to Modo to render with V-Ray:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/flyer_background_03@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Heart in a cage</title>
<guid isPermaLink="false">239</guid>
<link>https://mail.photoindra.com/all/heart-in-a-cage/</link>
<pubDate>Thu, 28 Feb 2019 06:58:22 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/heart-in-a-cage/</comments>
<description>
&lt;p&gt;I’m in love with SideFX Houdini. It’s the most fun and liberating app for 3d that I’ve ever seen. It’s procedural, node-based, and the developers are moving to the future (looking at you, Modo, with sad eyes). Around a month ago I also bit the bullet and bought the Redshift rendering engine.&lt;br /&gt;
There are tons of tutorials for 3d artists of every level. But this one gives a really strong foundation in poly-modeling:&lt;br /&gt;
&lt;a href="https://www.udemy.com/vehicle-modeling-in-houdini-16-scifi-dropship/"&gt;Vehicle Modeling in Houdini 16.5&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I made a short animation:&lt;/p&gt;
&lt;div class="e2-text-video"&gt;
&lt;iframe src="https://player.vimeo.com/video/319772567" allow="autoplay" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;And here is how it was made:&lt;/p&gt;
&lt;div class="e2-text-video"&gt;
&lt;iframe src="https://player.vimeo.com/video/319764920" allow="autoplay" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="1080" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/heart_cage_frame_0021@2x.jpg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/heart_cage_frame_0052@2x.jpg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/heart_cage_network@2x.png" width="488" height="847" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;There is still a lot to learn. For example, I don’t know how to get motion blur to work when the amount of geometry changes from frame to frame. I guess I need to calculate velocity on birth and pass it to the final geo, but I don’t know exactly how to do that.&lt;/p&gt;
</description>
</item>


</channel>
</rss>