POINT CLOUD RENDERING: Blender 3.1

Point clouds can be opened easily enough in Blender, but they will not render out unless particular attributes have been assigned to the point cloud using Blender’s Geometry Nodes capability – essentially, a real, renderable 3d primitive is placed at each point.

To import a point cloud the format must be PLY; either export from the point cloud software as PLY or use CloudCompare to open the point cloud and resave it as PLY. This is also a good opportunity to optimise the point cloud with the CloudCompare Subsample tool – i.e. make it smaller; point clouds, particularly from laser scanners, can be very large and will cause performance issues even on high-spec machines.
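
If the conversion needs repeating for several scans it can be scripted; the sketch below drives CloudCompare’s command-line mode from Python. The file name and the 0.02 m subsample spacing are placeholder values, and CloudCompare is assumed to be on the system PATH.

    # Sketch: thin a scan and resave it as PLY using CloudCompare's
    # command-line mode. Assumes the CloudCompare executable is on PATH;
    # "scan.e57" and the 2 cm spacing are placeholder values.
    import subprocess

    subprocess.run([
        "CloudCompare", "-SILENT",
        "-O", "scan.e57",             # source point cloud
        "-SS", "SPATIAL", "0.02",     # subsample: roughly one point per 2 cm
        "-C_EXPORT_FMT", "PLY",       # export format for Blender
        "-SAVE_CLOUDS",
    ], check=True)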

1. In Blender [3.1 or higher], in a new empty file, select:

    FILE - IMPORT - Stanford (.PLY)

and browse / select the point cloud file

In this example a relatively small file has been imported – this export is from the iPhone Lidar app EveryPoint (pressing Ctrl-Alt-Q switches to a 4-viewport view of the point cloud).
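
The import can also be run from Blender’s Python console – a minimal sketch, with a placeholder file path to substitute:

    # Blender 3.1: import a PLY point cloud via the Python console or Text Editor.
    # The file path is a placeholder - point it at your own PLY export.
    import bpy

    bpy.ops.import_mesh.ply(filepath="/path/to/pointcloud.ply")
    cloud = bpy.context.selected_objects[0]   # in a new empty file this is the imported cloud
    print(cloud.name, len(cloud.data.vertices), "points")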

If your point cloud has no colours – or you do not want them – you can skip step 2 and go straight to step 3.

2. Change to the SHADING tab along the top and click NEW for a new material. Give this a name like PCMaterial.

Now some particular attributes need to be added: from the middle bar click

    ADD - INPUT - ATTRIBUTE

Drop this node to the left and in the Name field call it Col – note, this must have a capital “C”.

To the right of this add another node for the colour value:

    ADD - COLOR - HUE SATURATION 

And set the Saturation field to 2.

These nodes can now be linked by dragging from the input / output points. Drag Color to Color and Color to Base Color so the node arrangement looks like this:
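
For reference, the same material can be built with a short script – a minimal sketch following the node names and values from this step:

    # Sketch: build PCMaterial as described in step 2 -
    # Attribute("Col") -> Hue/Saturation -> Principled BSDF Base Color.
    import bpy

    mat = bpy.data.materials.new("PCMaterial")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links

    attr = nodes.new("ShaderNodeAttribute")
    attr.attribute_name = "Col"                 # capital "C", matching the PLY vertex colours
    hsv = nodes.new("ShaderNodeHueSaturation")
    hsv.inputs["Saturation"].default_value = 2.0
    bsdf = nodes["Principled BSDF"]             # created automatically by use_nodes

    links.new(attr.outputs["Color"], hsv.inputs["Color"])
    links.new(hsv.outputs["Color"], bsdf.inputs["Base Color"])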

3. Switch to the GEOMETRY NODES tab from the top of the screen and click NEW in the middle bar to create a base Group Input and Group Output.

Create 2 new instance nodes (if you drop these on the line they will automatically link up properly)

    ADD - INSTANCE - INSTANCE ON POINTS

and to the right

    ADD - INSTANCE - REALIZE INSTANCES

to end up with this layout:

Next, add another node to define the real, renderable object that will sit at the location of each point in the point cloud – for this example, a cube.

    ADD - MESH PRIMITIVE - CUBE

Start by giving the cube dimensions of 0.01 – this figure will likely need to be altered to suit the scale of the point cloud.

Link the Mesh OUT to the Instance IN – but first note the following point:

This is where the point cloud will display as a proper renderable form – BUT it can take a long time depending on the size of the point cloud. Save first, close other applications, etc.

The renderable point cloud will now show in the viewport (to speed up this view, or to eliminate viewing errors due to the location of the light, you can view it as a simple solid form by clicking the solid circle icon in the top right-hand corner).

The X, Y and Z dimensions of the cube in the cube node can be changed – and the model will update accordingly.

4. Finally, the colour (if any) of the point cloud can be brought in by adding another node and linking it to the material made in step 2:

    ADD - MATERIAL - SET MATERIAL

Drop this on the Mesh-to-Instance link line and in the bottom field type the name the material was given in step 2 – “PCMaterial”.

Even if there was no colour information in the point cloud, this node can be useful for assigning a material for colour / shininess / transparency etc.
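
For reference, steps 3 and 4 can be consolidated into a script – a sketch only, assuming the imported point cloud is the active object and that the PCMaterial from step 2 already exists (node and socket names are those used in Blender 3.x):

    # Sketch: build the Geometry Nodes tree from steps 3-4 -
    # Group Input -> Instance on Points (cube instances, with Set Material
    # on the mesh-to-instance link) -> Realize Instances -> Group Output.
    import bpy

    obj = bpy.context.active_object               # the imported point cloud
    group = bpy.data.node_groups.new("PC_Render", "GeometryNodeTree")
    group.inputs.new("NodeSocketGeometry", "Geometry")
    group.outputs.new("NodeSocketGeometry", "Geometry")
    n, l = group.nodes, group.links

    gin = n.new("NodeGroupInput")
    gout = n.new("NodeGroupOutput")
    cube = n.new("GeometryNodeMeshCube")
    cube.inputs["Size"].default_value = (0.01, 0.01, 0.01)   # adjust to the cloud's scale
    setmat = n.new("GeometryNodeSetMaterial")
    setmat.inputs["Material"].default_value = bpy.data.materials["PCMaterial"]
    inst = n.new("GeometryNodeInstanceOnPoints")
    real = n.new("GeometryNodeRealizeInstances")

    l.new(gin.outputs["Geometry"], inst.inputs["Points"])
    l.new(cube.outputs["Mesh"], setmat.inputs["Geometry"])
    l.new(setmat.outputs["Geometry"], inst.inputs["Instance"])
    l.new(inst.outputs["Instances"], real.inputs["Geometry"])
    l.new(real.outputs["Geometry"], gout.inputs["Geometry"])

    mod = obj.modifiers.new("GeometryNodes", "NODES")
    mod.node_group = group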

NOTE: Material operations will only render properly with the Cycles renderer. Eevee renders much more quickly – but with flat colours (which may be fine for mono point clouds).
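
Switching engines can be done from the Render Properties panel or with one line of Python:

    # Material-driven colour needs Cycles; Eevee gives quick, flat previews.
    import bpy

    bpy.context.scene.render.engine = 'CYCLES'   # or 'BLENDER_EEVEE'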

Credits and thanks to Michael Prostka for his YouTube guide and development of the PLY import process.

POINT CLOUD RENDERING: 3D Studio / V-Ray

It is possible to bring point cloud files into 3D Studio Max and use the software’s advanced camera and animation tools to create rich visualisations of the scans. Do remember that point clouds behave differently to standard 3DS geometry – they cannot be directly manipulated in the same way as a mesh can, and lighting and atmospheric effects will behave differently (point clouds do not cast shadows, for instance). With point clouds it is more a case of affecting the model than manipulating it.

Note: standard 3DS will allow you to import the point cloud, but the standard 3DS renderer will not render it – you will need to set the renderer to V-Ray or Arnold for it to render properly.

IMPORT WITH 3D STUDIO MAX

The following point cloud formats are supported by 3DS:

  • .RCS / .RCP – RCS recommended
  • .LAS / .LAZ
  • .E57
  • .PLY

In 3D Studio go to the Create Geometry panel and select Point Cloud Objects from the drop-down

Then select the Point Cloud button below Object Type and click in the viewport to place roughly where the point cloud is to go – this placeholder will show as a wireframe box. [Note: if this just appears as the 3 red/grey axis lines and not the white box then do a File – Reset and try again; there seems to be an occasional bug when starting a point cloud]

Below Point Cloud Source click Load Point Cloud and browse to and select your saved RCS file – this will plant the point cloud into the placeholder (it will probably come in huge so you will need to zoom out to see it all)

With the point cloud object selected you will be able to see and edit its properties under the Modify tab:

Point Cloud Properties:

  • Limit Box: you can drag the walls of this box to crop the point cloud just to the area you want to be seen – it does not delete the areas outside, just hides them
  • Display Colour Channel Dropdown: choose the colour mode for the points; true colour is their original colour. Single colour allows you to pick one colour for all points, etc.
  • Level of Detail Rendering: for performance related issues you can slide this up and down to get a rougher but quicker representation of the model when you are working on it.
  • Fixed in Rendering: if you set this to On and drag the Level of Detail high, then when you render it will render at the highest quality – not at the setting you have for the viewport. [Though rendering at lower quality levels can create interesting results, so it is worth experimenting.]
  • Point Display: this is the size of the points and the most important variable in changing the look – experimentation here is absolutely key; high figures give fuller (and quicker) renders, while really low figures like 0.1 create nice fine-point results. Note: the point sizes will relate to the resolution of your final render – so what looks fine at 640×480 may be too faint at 1920×1080.
  • Limit Box: this is a way of disabling / enabling the limit box if you have used that to crop your point cloud
  • Display Volumes: this is a useful way to choose only a certain area of the point cloud to be visible by selecting another object such as a cube or sphere to define the extent you want to see. These objects can be combined or inverted.
  • Modifier List: there is a huge list of modifiers available in 3D Studio for bending, twisting, morphing etc.; however, the vast majority of them will not work on point cloud objects as they are not standard geometry.
True Colour – 1.0 Point size | True Colour – 0.1 Point size | Single Colour – 0.1 Point size

RENDER WITH V-RAY

You must be using the V-Ray renderer (or Arnold) to be able to render point clouds; Scanline, Art, etc. will render blank.

The renders will look much richer than the viewport depictions, so it is really worth spending some time getting the point cloud settings right – especially the point size, which has the most effect.

Full point cloud renders do take time to process but V-Ray has a feature to cap the amount of time it will refine the renders for. This is very useful for experimenting with different settings to get fast feedback. For final renders remove the time limit to allow a full render to be generated.

Render time set to 30 seconds | set to 2 minutes | set to fully complete

In the Render Setup window select the V-Ray tab; below Progressive Image Sampler, the Render Time field sets how many minutes will be spent on the render – so 0.5 means the render is processed for 30 seconds and 2 means 2 minutes. To allow the render to process entirely, enter 0 in this field.

Creed Monument – Scan Techniques

The monument of the Creed family sits against the North wall of St. Alfege Church, Greenwich. Sir James Creed (1696 – 1792) was an MP and lead merchant and is buried with his wife at the church. This is a marble monument, about 4 metres high – with markings higher up that suggest a metal cross piece used to be fixed to it.

Photogrammetry Scan

By photographing an object from all sides and capturing many images – with enough overlap so they can be tied together – photogrammetry software can create an accurate 3d model of that object. The resulting mesh object can then be edited and used in CAD / 3D modelling software such as 3DS Max, Rhino, Maya etc. Processes could include replacing textures / materials or applying sun and light models to examine artificial shadow patterns.

This model was created with the software Zephyr Aerial 4.5 using 64 photographs taken with an Apple iPhone X in good daylight. The clarity of a high definition photograph enables the model to carry over very fine, close up detail. Zephyr allows for the mesh to be tidied up, cropped and then exported to the Sketchfab website / service which allows models to be zoomed, spun and examined via browser or app (embedded link below).

Photogrammetry lends itself particularly well to constructing museum-grade scans of smaller, closer objects. It can also deal with larger projects, though these are likely to require extra equipment – drones, zoom lenses, etc. – to capture distant, high-up and otherwise hidden spots and sufficiently cover the entire subject.

Creed Monument: Photogrammetry

Laser Scan

LIDAR technology – in effect, radar with light – bounces many light rays off objects within a space to measure the distances to those objects and build up a cloud of points with accurate spatial data representing the shapes found. Typically a tripod-mounted laser scanner rotates the beam vertically and the scanner unit horizontally to capture a 360° sphere of data in a single scan. A number of scans are carried out to best capture the space from all points – and eliminate “blind spots”. These scans are combined – or registered – to create a single unified point cloud.

While the density – and size – of the points can give the impression of solid geometry, it is important to remember that this model is made of floating dots – not solids or meshes that can be edited in the same way as the final photogrammetry output. The size of these points can be adjusted to create revealing, x-ray style views through a building. More practically, a point cloud survey of a site can reside as a reference layer on a CAD site plan; the very fine accuracy of a laser scan and the distance it can reach are distinct advantages.

A Leica BLK360 scanner was used to carry out this scan – with three scans around the monument registered into a single point cloud. Each scan takes around 5 minutes, and with so few scans the registration process is straightforward – for large projects with lots of scans it can be a very involved and time-consuming process.

Relative to other laser scanners this model has a range of “only” around 50m (the Faro scanners reach nearly three times this). The high concentration of light points sent out also means that – even with tree coverage around a building or landscape – enough of the beams will still get through to record the semi-hidden subject behind.

Creed Monument: Laser Scan

iPhone Polycam App

The Apple iPhone 12 Pro and iPad Pro include a Lidar sensor – a feature that enhances the accuracy of distance measurement for augmented reality and camera focusing. This feature has also been utilised by a number of developers to create Lidar scanning apps, which open up the opportunity for quick, on-the-go scans straight from the phone.

This app by Polycam is one of the earliest and best to exploit the hardware and point to the possibilities of this handset-based technique.

This is a lower resolution mesh but the high resolution images wrapped around it still give a good impression of the model. Polycam and the Lidar sensor will continuously try to correct themselves during the scan sweep to maintain alignment – but there are a few tears in this example where the registration has slipped. More careful movement when scanning would help to prevent this. This scan took about 5 minutes.

Creed Monument – Polycam Scan / iPhone

iPhone TrueDepth Apps

Recent Apple devices use the front facing camera – with its “TrueDepth” sensor – to capture 3D information for use with Face ID authentication and Animoji. This technique involves projecting 30,000 infrared points and reading back a 3d map of the user’s face. As with the Lidar apps, developers have utilised this feature to author apps that can 3d scan with it.

Heges and Capture by Standard Cyborg are two good apps that leverage the power of TrueDepth to carry out 3d scans.

Although the capture resolution here is very high, the range is short, which makes it suitable only for smaller, close-up scans. The other big barrier is that, since it uses the front-facing camera, the handset needs to be pointed at the subject with the screen facing away from the user. This can make it difficult to see which areas are being scanned – though the Heges app does include a screen-share feature where the scan view shows on another device. Where possible, constructing a rig that can rotate the camera smoothly all around the model is another option to control speed and shake.