<?xml version="1.0" encoding="utf-8"?> 
<rss version="2.0"
  xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
  xmlns:atom="http://www.w3.org/2005/Atom">

<channel>

<title>Photoindra</title>
<link>https://mail.photoindra.com/</link>
<description>My telegram</description>
<author></author>
<language>en</language>
<generator>Aegea 11.3 (v4134)</generator>

<itunes:subtitle>My telegram</itunes:subtitle>
<itunes:image href="" />
<itunes:explicit></itunes:explicit>

<item>
<title>Real directors vs AI Slop – Human Pacing Still Wins</title>
<guid isPermaLink="false">287</guid>
<link>https://mail.photoindra.com/all/real-directors-use-of-seedance-2/</link>
<pubDate>Thu, 02 Apr 2026 09:56:03 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/real-directors-use-of-seedance-2/</comments>
<description>
&lt;p&gt;Every day the AI news brings another “this will change everything”. Tons of influencers claim that “this will end Hollywood”. And sometimes generated videos even look OK on small phone screens. What’s always lacking is pacing, consistency, and a non-AI-generic look on a 4K TV screen.&lt;/p&gt;
&lt;p&gt;Higgsfield is an AI services aggregator. I don’t use it because of the many negative reviews online: they sell you the idea of “unlimited” access, but people quickly find out how “limited” it really is.&lt;br /&gt;
Their promo materials are quite fun to watch, though. Recently they came up with Arena Zero, a 10-minute video generated with the new Chinese model Seedance 2.0 (yes, the one facing backlash in the USA for allegedly being trained on copyrighted Hollywood movies). The video was made by 4 directors (from Kazakhstan and Russia) in 4 days: 2 for pre-production and 2 for color grading, sound, and editing. I’m pretty sure other designers were involved in generating scenes as well.&lt;/p&gt;
&lt;p&gt;Important parts left out of the breakdown:&lt;br /&gt;
How many credits did they use? They talk about “5000 decisions”.&lt;br /&gt;
Did they have some sort of “unlimited” access without waiting in line?&lt;br /&gt;
Is it limited to 720p only? They didn’t mention upscaling during the interviews.&lt;br /&gt;
Also, from what I can see, access to Seedance 2.0 is currently limited to “business” accounts, with “quick business verification required for access”, whatever that means. I guess it’s to make sure the company is registered outside of the USA, to avoid possible future copyright claims.&lt;/p&gt;
&lt;p&gt;But it’s the first time I’ve watched 10 minutes of so-called AI slop and didn’t feel bored.&lt;/p&gt;
&lt;p&gt;Links to original videos:&lt;br /&gt;
&lt;b&gt;Arena Zero from Higgsfield AI&lt;/b&gt;&lt;/p&gt;
&lt;div class="e2-text-video"&gt;
&lt;iframe src="https://www.youtube.com/embed/qqcH-1Rk-ow?enablejsapi=1" allow="autoplay" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;&lt;b&gt;Arena Zero Full Breakdown: Seedance 2.0&lt;/b&gt;&lt;/p&gt;
&lt;div class="e2-text-video"&gt;
&lt;iframe src="https://www.youtube.com/embed/036jyFZWppw?enablejsapi=1" allow="autoplay" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;If you want a comparison, here is some influencer-made &lt;b&gt;generic AI slop&lt;/b&gt;:&lt;/p&gt;
&lt;iframe width="100%" height="auto" style="aspect-ratio: 16 / 9;" src="https://www.youtube.com/embed/436f_9twoy8?start=34" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;
</description>
</item>

<item>
<title>Plasticity to Houdini 21 recipe</title>
<guid isPermaLink="false">286</guid>
<link>https://mail.photoindra.com/all/plasticity-to-houdini-21-recipe/</link>
<pubDate>Wed, 10 Sep 2025 14:19:35 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/plasticity-to-houdini-21-recipe/</comments>
<description>
&lt;p&gt;I’ve been using &lt;a href="https://www.plasticity.xyz/"&gt;Plasticity&lt;/a&gt; a lot lately. It’s a simple CAD tool for surface modeling – much easier than Fusion 360 or MOI. I can model with booleans and curves there much faster.&lt;br /&gt;
Then I import the geometry into Houdini, and my import process usually looks like this:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/plast_import_01.jpg" width="571" height="815" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="1"&gt;
&lt;li&gt;Align the axis to the only correct way: Y is up, Z points forward to the viewer, and X points right.&lt;/li&gt;
&lt;li&gt;Scale to meters.&lt;/li&gt;
&lt;li&gt;Use a Match Size node in case I modeled without real-world dimensions.&lt;/li&gt;
&lt;li&gt;If you name your layers in Plasticity, they come into Houdini as a “path” attribute, so you can easily convert them to groups with a Groups from Name node.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To make this transition as easy as possible I use the new Recipes system (added in Houdini 20.5). To share those recipes between my different workstations, I created a recipes.json file inside the Houdini packages folder with this content:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;{
  &amp;quot;hpath&amp;quot;: &amp;quot;path_to_your_cloud_folder/Documents/Houdini/recipes_folder&amp;quot;,
  &amp;quot;env&amp;quot;: [
    { &amp;quot;HOUDINI_CUSTOM_RECIPE_DIR&amp;quot;: &amp;quot;path_to_your_cloud_folder/Documents/Houdini/recipes_folder&amp;quot; },
    { &amp;quot;HOUDINI_CUSTOM_RECIPE_LIBRARY&amp;quot;: &amp;quot;custom_recipes&amp;quot; }
  ]
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this setup, when you save a new tool as a recipe, Houdini automatically locks the Save To field to Custom File Path, pointing to:&lt;br /&gt;
path_to_your_cloud_folder/Documents/Houdini/recipes_folder/otls/custom_recipes.hda&lt;/p&gt;
&lt;p&gt;Here’s an example of another recipe that helps process low- and high-poly meshes for baking in Marmoset:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/plast_import_02@2x.jpg" width="959" height="677" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;I’ll post more details about this project later, but the main idea is to use crude low-res geo from Plasticity, clean it up in Houdini, and at the same time import a mid-poly version. Then use it in combination with Marmoset’s &lt;a href="https://marmoset.co/posts/revolutionize-your-3d-workflow-with-toolbags-bevel-shader/"&gt;rounded edge baking&lt;/a&gt;.&lt;br /&gt;
No more micro-beveling inside Plasticity or ZBrush.&lt;/p&gt;
</description>
</item>

<item>
<title>Houdini to Redshift: Keeping Colors Sharp</title>
<guid isPermaLink="false">285</guid>
<link>https://mail.photoindra.com/all/houdini-to-redshift-keeping-colors-sharp/</link>
<pubDate>Thu, 07 Nov 2024 05:39:30 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/houdini-to-redshift-keeping-colors-sharp/</comments>
<description>
&lt;p&gt;In Houdini, I usually assign color to primitives (though Houdini defaults to assigning it to “points”). However, if you want Redshift to recognize color attributes (using RSUserDataColor), you need to promote the Cd attribute to points or vertices, as Redshift doesn’t interpret it directly on polygons.&lt;/p&gt;
&lt;p&gt;Promoting Cd to points will result in color blending when you subdivide the model, which can create blurred colors. To maintain sharp color boundaries, promote Cd to vertices instead, as Redshift can understand vertex-level color attributes clearly.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houini_rs_colors_on_points_vs_vertices@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
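&lt;p&gt;As a minimal sketch of that setup (all attribute names are the defaults): assign a per-primitive color in a Primitive Wrangle, then promote it with an Attribute Promote node:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;// Primitive Wrangle: one random color per primitive
v@Cd = rand(@primnum);
// Attribute Promote: Original Name Cd,
// Original Class Primitive, New Class Vertex
// (vertex-level Cd keeps color boundaries sharp
// for RSUserDataColor, even after subdividing)&lt;/code&gt;&lt;/pre&gt;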
</description>
</item>

<item>
<title>Pony Halloween in Star5</title>
<guid isPermaLink="false">284</guid>
<link>https://mail.photoindra.com/all/pony-halloween-in-star5/</link>
<pubDate>Wed, 23 Oct 2024 17:53:53 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/pony-halloween-in-star5/</comments>
<description>
&lt;p&gt;While everyone is arguing about which AI is best at keeping people lying on the grass (Stable Diffusion 3.5 or Flux), I’m using Pony Diffusion to grab some souls this Halloween and form a “Cult of AI”.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="540" data-ratio="0.5625"&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_04@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_07@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_06@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_05@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_03@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_02@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_01@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_story_08@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Made with AI in ComfyUI on Windows, but you can build a similar graph in Colab – just customize the input and output folder paths.&lt;/p&gt;
</description>
</item>

<item>
<title>Baking textures with Redshift inside Houdini</title>
<guid isPermaLink="false">283</guid>
<link>https://mail.photoindra.com/all/baking-textures-with-redshift-inside-houdini/</link>
<pubDate>Wed, 14 Aug 2024 16:22:15 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/baking-textures-with-redshift-inside-houdini/</comments>
<description>
&lt;p&gt;I had to bake some texture maps with Redshift inside Houdini and haven’t seen any clear tutorials on how to do that, so here is a short guide.&lt;br /&gt;
Let’s say I created a complex material mixing different textures, adjusting them with color corrections, gradients, and noises. I’m happy with how it looks in the Redshift renderer, and now I want to pass the geometry to 3ds Max and set up the materials with Corona.&lt;br /&gt;
The general idea is that you need to create custom AOVs to bake all those textures. Link to the documentation about custom AOVs:&lt;br /&gt;
&lt;a href="https://help.maxon.net/r3d/houdini/en-us/#html/Custom+AOVs.html"&gt;https://help.maxon.net/r3d/houdini/en-us/#html/Custom+AOVs.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Here is what a test material network looks like:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/rs_mat_screenshot@2x.png.jpg" width="2560" height="1297" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Just for visual reference, I add black nulls to mark which maps I want to bake, and connect those nulls to the red nodes (StoreColorToAOV or StoreIntegerToAOV). Sadly, you can’t use `opinput(“.”,0)` to get the name of the connected node in the MAT context like you can in SOPs, so you’ll need to copy-paste the names from the nulls.&lt;/p&gt;
&lt;p&gt;Create a separate Redshift render node, and in the RenderMaps tab enable Render Maps Baking.&lt;br /&gt;
If during testing you want to switch quickly between texture resolutions (from 512×512 px to 1024, 2048, 4096), add an integer slider with a range from 0 to 3 to the interface of the RenderMaps tab (I called it indra_res_mult) and in the Output resolution add:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;512*pow(2, ch(&amp;quot;indra_res_mult&amp;quot;))&lt;/code&gt;&lt;/pre&gt;&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/rs_render_node@2x.png" width="867" height="642" alt="" /&gt;
&lt;/div&gt;
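&lt;p&gt;For reference, with the 0–3 slider range the expression above expands like this:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;indra_res_mult = 0  -&gt;  512*pow(2, 0) = 512
indra_res_mult = 1  -&gt;  512*pow(2, 1) = 1024
indra_res_mult = 2  -&gt;  512*pow(2, 2) = 2048
indra_res_mult = 3  -&gt;  512*pow(2, 3) = 4096&lt;/code&gt;&lt;/pre&gt;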
&lt;p&gt;If it is a black-and-white texture like roughness or a mask, use the scalar data type; it will then be saved as an 8-bit greyscale image. Those maps need to be gamma 1.0, but if you render them to PNG, Redshift saves them at gamma 2.2. So if you want gamma 1.0, save to TIF; for channels like base color, use PNG.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/rs_render_customAOV@2x.png.jpg" width="2560" height="2342" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Things to remember:&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;No tessellation on the OBJ level. It took me more than an hour to figure out why my maps looked strange, and it was just this one checkbox.&lt;/li&gt;
&lt;li&gt;No overlapping UVs.&lt;/li&gt;
&lt;li&gt;Faces have to be coplanar. I personally didn’t have a problem with this, but &lt;a href="https://youtu.be/oa-rlJmduWA?t=212"&gt;in this video&lt;/a&gt; for C4D it is recommended to triangulate.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Tips:&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;If you have assigned groups to geometry and want to use them inside Redshift materials: promote them to vertices at the SOP level and switch on “Output as Integer Attribute”. You don’t need to create a node for each group – just use GRP_* in the group name field (I usually start the names of all groups that I want to keep with GRP_). Then read the attribute inside materials with an “RS Integer User Data” node. Just writing “GRP_group_name” or “group:GRP_group_name” will not work; that’s why we convert the group to an integer attribute.&lt;/li&gt;
&lt;li&gt;To write those masks, use “RS Store Integer to AOV” – StoreColorToAOV will not work.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/group_promote@2x.png.jpg" width="2560" height="1722" alt="" /&gt;
&lt;/div&gt;
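&lt;p&gt;If you prefer a wrangle over the promote node, a Vertex Wrangle can bake a primitive group into an integer attribute directly (the group name GRP_metal here is just an example):&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;// Vertex Wrangle: 1 if the vertex’s primitive is in the group, else 0
i@GRP_metal = inprimgroup(0, &amp;quot;GRP_metal&amp;quot;, vertexprim(0, @vtxnum));&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The result is the same integer vertex attribute that “RS Integer User Data” expects.&lt;/p&gt;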
</description>
</item>

<item>
<title>Flexible Color Assignment in Houdini and Redshift</title>
<guid isPermaLink="false">282</guid>
<link>https://mail.photoindra.com/all/flexible-color-assignment-in-houdini-and-redshift/</link>
<pubDate>Wed, 07 Aug 2024 12:50:28 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/flexible-color-assignment-in-houdini-and-redshift/</comments>
<description>
&lt;p&gt;&lt;b&gt;How do you assign random colors from a specific set to objects and keep the setup flexible for changes with Houdini and Redshift?&lt;/b&gt;&lt;br /&gt;
Let’s say we have several plastic cups in a scene and 4 specific colors from a client.&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;Create a class attribute with a connectivity node (you can name it whatever you like).&lt;/li&gt;
&lt;li&gt;Promote it to the vertex level with an attribute promote.&lt;/li&gt;
&lt;li&gt;In the shader tree, use an RS Integer User Data node to bring in the attribute named “class” (or any other name that you gave it earlier).&lt;/li&gt;
&lt;li&gt;Connect it to an RS Jitter node (name it “max_variations_01”) and select “User Data ID” in Input ID Mode. In “integer jitter,” set the min to 0 and the max to 3, so we will have 4 variations. With this node, we only control the number of variations.&lt;/li&gt;
&lt;li&gt;Create another RS Jitter node (name it “lightness_range_01”); we will use it to create lightness variations. Keep the color black and set Saturation Variation Max to 0. Now you can control the randomness with the Value Seed.&lt;/li&gt;
&lt;li&gt;Create an RS Color Ramp (name it “recolor_01”) with 4 colors from your client and set the interpolation to constant.&lt;/li&gt;
&lt;li&gt;After adjusting the seed on the “lightness_range_01” node, you will need to move the colors a little bit on “recolor_01” so each of them will end up in a range generated by “lightness_range_01.”&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/jitter_material_gif.gif" width="1280" height="720" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Another thing you can do is offset the UVs for each object, so that roughness textures don’t visibly repeat. To do this, after the connectivity node add an Attribute Wrangle (Run Over: Vertices) with this:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;@uv.x+=rand(@class);&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Before:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/happy3d_corona_lesson_11_v001.uv_offset_off@2x.jpg" width="512" height="512" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;After:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/happy3d_corona_lesson_11_v001.uv_offset_on@2x.jpg" width="512" height="512" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Modeling low poly pirate ship in 3D</title>
<guid isPermaLink="false">281</guid>
<link>https://mail.photoindra.com/all/modeling-low-poly-pirate-ship-in-3d/</link>
<pubDate>Wed, 24 May 2023 08:09:31 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/modeling-low-poly-pirate-ship-in-3d/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/820526237?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1"  width="800" height="800" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen title="Pirate ship"&gt;&lt;/iframe&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/pirate_ship_texures_preview_01_800px@2x.jpg" width="800" height="450" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Timelapse of modeling a low poly pirate ship in 3D using Houdini, ZBrush, Substance Painter, RizomUV, and Redshift:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/824834538?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="800" height="450" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;script src="https://player.vimeo.com/api/player.js"&gt;&lt;/script&gt;
&lt;p&gt;If you prefer the video on YouTube with chapters, &lt;a href="https://www.youtube.com/watch?v=vrxc-6IOlek"&gt;you can watch it here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The small concept drawing you see in the timelapse I found on Pinterest quite a while ago. Sadly, I don’t know the author.&lt;br /&gt;
You can rotate the model in your browser here:&lt;/p&gt;
&lt;div class="sketchfab-embed-wrapper"&gt;&lt;iframe title="Pirate ship" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" width="800" height="450"  allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share src="https://sketchfab.com/models/3938caf042174c449ea3479bd6183508/embed?autospin=1&amp;autostart=1"&gt; &lt;/iframe&gt;
&lt;/div&gt;&lt;p&gt;I also passed the geometry to Meta Spark Studio to make an IG filter.&lt;br /&gt;
Here is a QR code to open the filter in Instagram:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/ig_filter_pirate_ship_qr_code@2x.jpg" width="400" height="400" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;If you want to do that, there are a couple of things to remember:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/wip_SCR-20230510-aib_crop_02.png" width="1126" height="1193" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/wip_SCR-20230510-aib_crop.png.jpg" width="2560" height="1837" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Search and replace paths in Houdini</title>
<guid isPermaLink="false">280</guid>
<link>https://mail.photoindra.com/all/search-and-replace-paths-in-houdini/</link>
<pubDate>Tue, 28 Feb 2023 12:00:51 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/search-and-replace-paths-in-houdini/</comments>
<description>
&lt;p&gt;If you want to search and replace paths in multiple locations, use the Windows -&gt; Hscript Textport window&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/SCR-20230228-fu9@2x.png" width="647" height="369" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;and write:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;opchange \$DOWNLOADS&amp;quot;/wetransfer&amp;quot; \$JOB&amp;quot;/geo&amp;quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The backslash before the variables lets you keep them as variables ($JOB and $DOWNLOADS in this case) instead of expanding them to full paths.&lt;/p&gt;
&lt;p&gt;Another case: you imported an FBX with materials. You can move them to the mat context and change the paths with this:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;opchange ../../materials /mat&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Another example: on a Windows machine the textures use absolute paths for some reason, and I want to change them all to the root of the job project:&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;opchange &amp;quot;C:/Users/user_name/Dropbox/Work/project_name&amp;quot; \$JOB&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Documentation:&lt;br /&gt;
&lt;a href="https://www.sidefx.com/docs/houdini/commands/opchange.html"&gt;https://www.sidefx.com/docs/houdini/commands/opchange.html&lt;/a&gt;&lt;/p&gt;
</description>
</item>

<item>
<title>Zbrush lowpoly modeling and polygroups.</title>
<guid isPermaLink="false">278</guid>
<link>https://mail.photoindra.com/all/zbrush-lowpoly-modeling-and-polygroups/</link>
<pubDate>Sun, 12 Jun 2022 08:06:08 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/zbrush-lowpoly-modeling-and-polygroups/</comments>
<description>
&lt;p&gt;If you ever wondered why, during polymodeling in ZBrush, you keep selecting several polygroups by CTRL+Shift-clicking on only one:&lt;br /&gt;
ZBrush uses vertices for most brushes and selections. If your polygon doesn’t have any other polygon of the same polygroup next to the vertex you clicked, it simply selects the next polygroup as well.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/zbrush_polygroup_isolation_problem_2022_06_12_06_26_38.gif" width="800" height="335" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;In this case you can use the Select Lasso tool and click on one edge to hide the full polygon loop, then invert the visibility by CTRL+Shift-dragging outside the mesh:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/zbrush_polygroup_isolation_solution_2022_06_12.gif" width="800" height="335" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Piano modeling in Houdini</title>
<guid isPermaLink="false">277</guid>
<link>https://mail.photoindra.com/all/piano-modeling-in-houdini/</link>
<pubDate>Sat, 21 May 2022 04:51:19 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/piano-modeling-in-houdini/</comments>
<description>
&lt;p&gt;Still getting used to &lt;a href="https://alexeyvanzhula.gumroad.com/l/axefdr"&gt;Modeler 2022 plugin&lt;/a&gt; in Houdini.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/instruments_piano_v01_rev_01_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;WIP:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/712114258?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="960" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen title="Hunting for likes"&gt;&lt;/iframe&gt;
</description>
</item>

<item>
<title>Houdini – random coloring from image palette.</title>
<guid isPermaLink="false">276</guid>
<link>https://mail.photoindra.com/all/houdini-random-coloring-from-image-palette/</link>
<pubDate>Sat, 14 May 2022 17:51:25 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/houdini-random-coloring-from-image-palette/</comments>
<description>
&lt;p&gt;I was trying to optimize my coloring process for a project, and here is where I am right now:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_all@2x.png" width="512" height="814" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Coloring process:&lt;/p&gt;
&lt;ol start="1"&gt;
&lt;li&gt;Get the palette that I want as a screenshot from here:&lt;br /&gt;
&lt;a href="https://paletton.com/#uid=60B0u0kllzcboPZgUH4pEuxt-pp"&gt;https://paletton.com/#uid=60B0u0kllzcboPZgUH4pEuxt-pp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Convert the image to Utility-Texture_sRGB with the target color space ACEScg using the PYCO ColorSpace converter. (I still need to run more tests on this part, using these .exr files as an emission texture to compare colors with a reference.)&lt;br /&gt;
&lt;a href="https://pyco.gumroad.com/l/pycocs"&gt;https://pyco.gumroad.com/l/pycocs&lt;/a&gt;&lt;br /&gt;
Free with the code &lt;i&gt;free&lt;/i&gt; at checkout.&lt;/li&gt;
&lt;li&gt;From GitHub you can install Color Palette Ramp – a Houdini HDA that creates a ramp based on a color palette from an image.&lt;br /&gt;
&lt;a href="https://github.com/jamesrobinsonvfx/colorpaletteramp"&gt;https://github.com/jamesrobinsonvfx/colorpaletteramp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;In Houdini, use that HDA (colorpaletteramp) at the SOP level to create a ramp. If I got the image from the Paletton webpage, I use Stops -&gt; 20, but something around 10 works great for other images.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_171514.png" width="494" height="688" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="5"&gt;
&lt;li&gt;With OD Tools you can right-click on the result and choose “Palletize Ramp [OD]” to make the color separation constant, so it looks more like a palette instead of a gradient. You can get OD Houdini Shelf Tools 2021 for $100 here:&lt;br /&gt;
&lt;a href="https://origamidigital.com/cart/index.php?route=product/product&amp;manufacturer_id=11&amp;product_id=66"&gt;https://origamidigital.com/cart/index.php?route=product/product&amp;manufacturer_id=11&amp;product_id=66&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_171537.png" width="505" height="1292" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="6"&gt;
&lt;li&gt;You can save this ramp in your OD Asset library for future use.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_174942.png" width="498" height="690" alt="" /&gt;
&lt;/div&gt;
&lt;ol start="7"&gt;
&lt;li&gt;To color geometry based on disconnected pieces: first use a “Connectivity” node on points to create an integer attribute called id. Then use an “Attribute Adjust Color” node with Adjustment Value -&gt; Pattern Type set to Random, Randomization By -&gt; Custom Attribute, and Custom Attribute -&gt; id. By changing the seed parameter you can then get random color combinations.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/screenshot_2022-05-14_174557.png" width="508" height="898" alt="" /&gt;
&lt;/div&gt;
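&lt;p&gt;That last Connectivity + Attribute Adjust Color combo can also be sketched as a single Point Wrangle with a color ramp parameter (the “palette” ramp and “seed” channel names here are made up):&lt;/p&gt;
&lt;pre class="e2-text-code"&gt;&lt;code class=""&gt;// Point Wrangle, after the Connectivity node:
// pick a color from the ramp per connected piece
v@Cd = chramp(&amp;quot;palette&amp;quot;, rand(i@id + ch(&amp;quot;seed&amp;quot;)));&lt;/code&gt;&lt;/pre&gt;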
&lt;p&gt;Results from 3 different ramps:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="540" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_01_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_02_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_03_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_04_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_05_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/misc_arcade_machine_v01_rev_06_redshift@2x.jpg" width="540" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Hunting for likes</title>
<guid isPermaLink="false">275</guid>
<link>https://mail.photoindra.com/all/hunting-for-likes/</link>
<pubDate>Thu, 31 Mar 2022 09:04:14 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/hunting-for-likes/</comments>
<description>
&lt;p&gt;A small personal project I made in Houdini while testing the Axiom GPU solver. The idea is that we are always hunting for likes and “hearts”, setting traps with hot topics.&lt;br /&gt;
Looped video with music:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/694429700?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="540" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen title="Hunting for likes"&gt;&lt;/iframe&gt;
&lt;p&gt;Work in progress:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="1080" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_01.idea_sketch@2x.jpg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_04.sound_fx_recording@2x.jpg" width="1080" height="1080" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_04.jpg" width="1080" height="1080" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_03@2x.jpeg" width="1280" height="720" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.wip_02@2x.jpeg" width="1280" height="720" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Stills:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="1080" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.static.cam1_static.0176@2x.jpeg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.static.cam1_static.0148@2x.jpeg" width="1080" height="1080" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/star5_libre_v05.static.cam1_static.0120@2x.jpeg" width="1080" height="1080" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Destiny 2 – Russian clan – GOT</title>
<guid isPermaLink="false">274</guid>
<link>https://mail.photoindra.com/all/destiny-2-russian-clan-got/</link>
<pubDate>Fri, 18 Feb 2022 07:45:19 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/destiny-2-russian-clan-got/</comments>
<description>
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/destiny_20220204_133307@2x.jpg" width="1280" height="536" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;I’ve been playing Destiny since 2014, with long pauses between sessions, but it still makes me just as happy. For me it’s like a sport. What’s the difference between watching a football match on TV and watching a “Trials of Osiris” match, where a madman goes out alone against three and flawlessly wins seven rounds?&lt;br /&gt;
One of the most beautiful and challenging parts of the game is available only to team players. I recently joined a Russian-speaking clan, GUARDIANS OF THE TRUTH [GOT]. Very nice people: they show you everything and explain things calmly and clearly. As befits a newbie, I sometimes mess up. If you play Destiny on the occasional weekend – you are welcome to join.&lt;/p&gt;
&lt;p&gt;Link to our clan on the Bungie website:&lt;br /&gt;
&lt;a href="https://www.bungie.net/ru/ClanV2?groupid=2749474"&gt;https://www.bungie.net/ru/ClanV2?groupid=2749474&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Link in Discord:&lt;br /&gt;
&lt;a href="https://discord.gg/kkESWJ7"&gt;https://discord.gg/kkESWJ7&lt;/a&gt;&lt;/p&gt;
</description>
</item>

<item>
<title>Shaman – Houdini vs Blender</title>
<guid isPermaLink="false">273</guid>
<link>https://mail.photoindra.com/all/shaman/</link>
<pubDate>Sat, 11 Dec 2021 12:24:42 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/shaman/</comments>
<description>
&lt;div class="e2-text-picture"&gt;
&lt;div class="fotorama" data-width="800" data-ratio="1"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_01@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_01_wireframe@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_01_uv@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I had wanted to try Blender for a long time, and came across a series of tutorials from the YouTube channel &lt;a href="https://www.youtube.com/channel/UCLYrT1051M_6XkbEc5Te8PA"&gt;Blender 3d&lt;/a&gt;. After watching them, it became clear why so many people love this free software.&lt;br /&gt;
I started in Blender but then jumped back into Houdini: with a plugin called Modeler, you can repeat the same steps without problems.&lt;br /&gt;
Here is a “turntable” and then a sped-up walkthrough:&lt;/p&gt;
&lt;iframe src="https://player.vimeo.com/video/654912254?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="640" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;I did the UVs in RizomUV. They have just &lt;a href="https://www.rizom-lab.com/whats-new-in-rizomuv-2022/"&gt;released an update&lt;/a&gt;, and now you can nest one group inside another. For example, a group of “feathers” can be included in the “head” group and packed together with it. One of my favorite tricks: you can pack the islands using their direction in 3D space. Want everything aligned along Y in UV space? It’s a click of a button. By the way, groups made in Houdini are visible in Rizom.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_uv_ver02_wip@2x.jpg" width="960" height="369" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;After UVing, I imported some groups into ZBrush to add details.&lt;br /&gt;
I baked from high to low in Marmoset. It also understands groups from Houdini, so there is no need to export an “exploded” mesh separately, as is usually done for baking in Substance. Another nice thing is the auto-reload of textures and geometry: if you change something in another program and save, Marmoset automatically picks up the changes.&lt;br /&gt;
I textured in Substance Painter, then rendered in Houdini with Redshift.&lt;/p&gt;
&lt;p&gt;To make the cartoon outline, I cloned the geometry and assigned a double-sided material to it: the “front” is transparent, and the “back” has only a black emission material. Then you add displacement driven by a constant instead of a texture, and that’s it. You control the thickness of the line with the displacement amount, and the color of the line with the emission (yellow in this example):&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_material_setup@2x.jpg" width="905" height="540" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Then I repeated the same trick in Marmoset. It works when rendering, but displacement is not supported in the “viewer”. So if you want to send a link to a client so they can rotate the model in the browser, you need another approach:&lt;br /&gt;
I exported additional geometry from Houdini, but with reversed normals and slightly inflated with the “Peak” node. Then in Marmoset I assigned it a new dark material without reflections and set the Diffusion module to Unlit.&lt;/p&gt;
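&lt;p&gt;The inverted-hull idea can be sketched outside of any DCC. A minimal plain-Python sketch with toy data (not Houdini’s or Marmoset’s actual code): push each vertex out along its normal, then reverse each face’s vertex order so the shell faces inward, like the reversed normals in the export:&lt;/p&gt;

```python
def make_outline_shell(points, normals, faces, thickness=0.02):
    """Inverted-hull outline: inflate along normals, flip winding."""
    inflated = [
        (p[0] + n[0] * thickness,
         p[1] + n[1] * thickness,
         p[2] + n[2] * thickness)
        for p, n in zip(points, normals)
    ]
    # Reversing each face's vertex order flips its orientation,
    # which is what the reversed normals do in the exported geometry.
    flipped = [face[::-1] for face in faces]
    return inflated, flipped

# Toy example: one triangle whose vertex normals all point up (+Y).
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
nrm = [(0.0, 1.0, 0.0)] * 3
tri = [(0, 1, 2)]
shell_pts, shell_faces = make_outline_shell(pts, nrm, tri, thickness=0.1)
print(shell_pts[0])    # (0.0, 0.1, 0.0)
print(shell_faces[0])  # (2, 1, 0)
```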
&lt;p&gt;Here is the result that you can rotate:&lt;/p&gt;
&lt;iframe src="https://photoindra.com/2021/shaman/index.html" width="640" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;And a couple more renders from Redshift:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_02@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/shaman_06_03@2x.jpg" width="800" height="800" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;&lt;i&gt;The original concept drawing was made by the amazing &lt;a href="https://www.behance.net/gallery/19338867/La-Foret-Oublie-Characters"&gt;La Foret Oublie&lt;/a&gt;.&lt;/i&gt;&lt;br /&gt;
&lt;i&gt;And here is a &lt;a href="https://marmoset.co/posts/texturing-rendering-zbrush-sketches-toolbag-4/"&gt;great article&lt;/a&gt; on shading in Marmoset.&lt;/i&gt;&lt;/p&gt;
</description>
</item>

<item>
<title>Panama – Playa Caracol</title>
<guid isPermaLink="false">272</guid>
<link>https://mail.photoindra.com/all/panama-playa-caracol/</link>
<pubDate>Tue, 06 Jul 2021 07:48:09 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/panama-playa-caracol/</comments>
<description>
&lt;p&gt;I’m trying to figure out how to live on both Windows and macOS at the same time. For the sake of a little test I shot some videos with a selfie stick and threw them into DaVinci Resolve 17 on Windows (a free program for video editing). In version 17 they added a button to search for “lost” files. I sent the project without the videos via Dropbox to an M1 MacBook and added music on the Mac. To avoid sending the entire project, you can now export only the “timeline”. It weighs almost nothing, so you can send it even from a phone. I opened it back on Windows – everything just works.&lt;/p&gt;
&lt;div style="padding:56.25% 0 0 0;position:relative;"&gt;&lt;iframe src="https://player.vimeo.com/video/571349316?badge=0&amp;amp;autopause=0&amp;amp;player_id=0&amp;amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen style="position:absolute;top:0;left:0;width:100%;height:100%;" title="2021_05_playa_caracol_ver01_vimeo"&gt;&lt;/iframe&gt;
&lt;/div&gt;&lt;script src="https://player.vimeo.com/api/player.js"&gt;&lt;/script&gt;
</description>
</item>

<item>
<title>Bomba &amp; Plena – vol 26</title>
<guid isPermaLink="false">269</guid>
<link>https://mail.photoindra.com/all/bomba-plena/</link>
<pubDate>Fri, 14 May 2021 11:29:12 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/bomba-plena/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/549348031?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="960" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;A short looped animation to promote a local old-school reggae event.&lt;/p&gt;
&lt;p&gt;Wallpaper in 4k:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/bomba_plena_vol_26_wallpaper_4k.jpeg.jpg" width="2560" height="1440" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/cam_static_hires_@2x.jpg" width="720" height="1280" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Saturday liquid 01.</title>
<guid isPermaLink="false">268</guid>
<link>https://mail.photoindra.com/all/saturday-liquid-01/</link>
<pubDate>Tue, 27 Apr 2021 14:07:56 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/saturday-liquid-01/</comments>
<description>
&lt;iframe width="960" height="540" src="https://www.youtube.com/embed/AXZ6LoQdhn8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;
</description>
</item>

<item>
<title>What language to use in Panama.</title>
<guid isPermaLink="false">267</guid>
<link>https://mail.photoindra.com/all/what-language-to-use-in-panama/</link>
<pubDate>Thu, 25 Mar 2021 07:51:16 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/what-language-to-use-in-panama/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/528805285?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="540" height="960" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;Sometimes I feel like writing or talking on social media, but deciding which language to use is tricky. Russians don’t know Spanish. Many Panamanians don’t speak English fluently and certainly don’t know Russian.&lt;br /&gt;
Everything I watch and read is in English, but I rarely speak it. All my notes are in English too.&lt;br /&gt;
After ten years in the tropics, I began to forget some Russian words. Whether they are worth remembering, I don’t know. Better to learn to speak correct Spanish. I often speak like a dockworker, with zero benevolence. My partner suggested that at the beginning of conversations with new people I mention that I am Russian, and that frankness is not considered rudeness in our country.&lt;/p&gt;
&lt;p&gt;In a &lt;a href="https://www.youtube.com/watch?v=82O5QkBdR00"&gt;video&lt;/a&gt; suggested by YouTube, a guy makes a dialog icon in Sketch in five minutes:&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/sketch_app_idea.jpg" width="592" height="334" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;I liked the thumbnail, so I recreated it in 3D.&lt;br /&gt;
At the same time, I practiced stitching high-dynamic-range panoramas, which are used for lighting most 3D scenes. Just before the quarantine started, I took a few photos in the office. Unfortunately, the only program that stitches them well (&lt;a href="https://www.ptgui.com/"&gt;PTGui 12&lt;/a&gt;) costs $300. The demo works without restrictions, but it fills everything with watermarks. In Photoshop, however, they can be erased, even in 32 bits. I guess that’s fine for a fun project.&lt;br /&gt;
I also figured out a bit more about the difference between point and primitive normals in Houdini.&lt;/p&gt;
&lt;h2&gt;Sometimes I need to get vector graphics from Illustrator and extrude them in Houdini.&lt;/h2&gt;
&lt;p&gt;A constant &lt;b&gt;problem&lt;/b&gt; is that some parts come in flipped and extrude in a different direction. This happens because the paths were drawn in different directions in Illustrator.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houdini_illustrator_problem_01@2x.jpg" width="1064" height="868" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;So I thought I could add an Attribute Expression node, set it to points, set the attribute to normal, pick Constant Value from the VEXpression dropdown, and type 0, 1, 0 into Constant Value to get normals pointing up. But this will not change the primitive normals, because primitive normals are not actually an attribute: they are derived information, calculated from the vertices that make up the primitive, so they cannot be modified directly. You can still use PolyExtrude, set it to point normals and the extrusion mode to Existing, but you will end up with geometry where some primitive normals point “out” and others point “in”. I don’t know if there is an easy fix for that.&lt;/p&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houdini_illustrator_problem_02@2x.jpg" width="1064" height="868" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;So after bringing your paths in from Illustrator, you first need to separate the primitives that are flipped. You can do this with a simple Group node: use only “Keep by Normals”, set the direction to 0, -1, 0, and lower the spread angle. There is also a Labs Split Primitives by Normal node that does exactly this with fewer clicks.&lt;br /&gt;
Then use a Reverse node – it reverses the vertex order in those primitives.&lt;/p&gt;
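&lt;p&gt;Why does reversing the vertex order help? A primitive normal is derived from the winding of its vertices, essentially a cross product of the edge vectors, so reversing the order negates it. A small plain-Python illustration of the idea (not Houdini’s actual code):&lt;/p&gt;

```python
def triangle_normal(a, b, c):
    """Primitive normal derived from vertex order: cross(b - a, c - a)."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = (0, 0, 0), (1, 0, 0), (0, 0, 1)
print(triangle_normal(a, b, c))  # (0, -1, 0): this winding points down
print(triangle_normal(c, b, a))  # (0, 1, 0): reversed winding points up
```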
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/houdini_illustrator_problem_03@2x.jpg" width="1064" height="868" alt="" /&gt;
&lt;/div&gt;
&lt;p&gt;Takes are also amazing. You can create different versions of a scene, and in the render node save the image with the take name in the path, like this:&lt;br /&gt;
$HIP/render/r01/bubbles.static.s5.`chsop("take")`.$F2.tif&lt;br /&gt;
The `chsop("take")` part expands to the take name, so in my case the output names are:&lt;br /&gt;
bubbles.static.s5.blue_bubbles.01.tif&lt;br /&gt;
bubbles.static.s5.orange_bubbles.01.tif&lt;/p&gt;
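&lt;p&gt;To preview what that pattern expands to, the naming can be mimicked in Python (base name, take names, and frame number are from this example):&lt;/p&gt;

```python
def output_name(take, frame, base="bubbles.static.s5"):
    # `chsop("take")` in the Houdini path expands to the take name;
    # $F2 is the frame number zero-padded to two digits.
    return f"{base}.{take}.{frame:02d}.tif"

for take in ("blue_bubbles", "orange_bubbles"):
    print(output_name(take, 1))
# bubbles.static.s5.blue_bubbles.01.tif
# bubbles.static.s5.orange_bubbles.01.tif
```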
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/bubbles.static.s5.orange_bubbles@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
&lt;div class="e2-text-picture"&gt;
&lt;img src="https://mail.photoindra.com/pictures/bubbles.static.s5.blue_bubbles@2x.jpg" width="540" height="960" alt="" /&gt;
&lt;/div&gt;
</description>
</item>

<item>
<title>Ya no tengo apetito</title>
<guid isPermaLink="false">265</guid>
<link>https://mail.photoindra.com/all/ya-no-tengo-apetito/</link>
<pubDate>Tue, 08 Dec 2020 21:59:58 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/ya-no-tengo-apetito/</comments>
<description>
&lt;iframe src="https://player.vimeo.com/video/488773528?autoplay=1&amp;loop=1&amp;autopause=0&amp;muted=1" width="640" height="640" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;Modeled in Modo, assembled in Houdini, rendered with Redshift, and post done in Nuke.&lt;/p&gt;
</description>
</item>

<item>
<title>Characters per second</title>
<guid isPermaLink="false">264</guid>
<link>https://mail.photoindra.com/all/characters-per-second/</link>
<pubDate>Fri, 04 Dec 2020 15:12:54 -0500</pubDate>
<author></author>
<comments>https://mail.photoindra.com/all/characters-per-second/</comments>
<description>
&lt;p&gt;I’ve been asked a couple of times to make text stay longer on screen during video editing, so today I finally googled the average reading speed for different languages. It’s 17 characters per second for adult programs and 13 characters per second for children’s programs.&lt;br /&gt;
For example, the phrase “Netflix is an interesting reference for different video related topics” is 61 characters (not counting spaces), so it should stay on screen about 4 seconds (61 / 17 ≈ 3.6). Here is an interesting guideline for text style:&lt;br /&gt;
&lt;a href="https://partnerhelp.netflixstudios.com/hc/en-us/articles/217349997-Castilian-Latin-American-Spanish-Timed-Text-Style-Guide"&gt;https://partnerhelp.netflixstudios.com/hc/en-us/articles/217349997-Castilian-Latin-American-Spanish-Timed-Text-Style-Guide&lt;/a&gt;&lt;/p&gt;
</description>
</item>


</channel>
</rss>