Click to embiggen: Catwa head texture displayed in normal, 1024, and 8K (pics via M. Vanmoer)
UPDATE, 2/13: In Comments, some readers have pointed out that this hack might not necessarily work as described, or could cause performance problems for users -- so caveat SLer. (Then again, the top avatar mesh bodies sold in Second Life already have major performance issues.) Also be sure to read Vanmoer's responses in Comments.
SLer "Frenchbloke Vanmoer" recently told me about a cool Debug setting in the Firestorm viewer that enables extremely high resolution textures to be displayed, with far more detail than is seen by default. (As seen above, and after the break.)
"It's not a bug as far as I know," he tells me, "just that it's hidden away in the Debug settings -- nobody looks in there properly... Some [creators] I have talked to were unaware it existed and were making all their textures for upload from up to 8K sources as 1K uploads."
He says making the change doesn't cause much viewer-side lag: "They are just the usual 1024 x 1024 textures as far as your inventory and viewer is concerned. The main difference is that it looks a lot nicer. It's the same principle as photo resizing as well as audio for broadcast -- the better the quality in, the better it will be when compressed. It's one of the first things you learn when using images -- best to start big and then scale down, than the other way around. I'm curious to see what mesh creators can do with this on uvmapped mesh."
As with any debug adjustment, user beware! Here's the Debug setting Vanmoer is talking about:
"It's the max_texture_dimension_X and max_texture_dimension_Y values -- I have 8K there for testing." (See above.) For best results, Vanmoer recommends putting in 4096, for 4K images.
Here's a couple more test images from my man Vanmoer, showing you how AAA-like you can make Second Life avatars look:
If you play with this setting, dear reader, share your results in Comments!
Does this setting allow you to *upload* textures bigger than 1024x1024? I'm surprised it works, then -- I would think that server-side they'd have limits on that, not depend on the client to do it.
In any event, I am having a hard time reconciling the claim that there are no performance hits with any understanding of computational reality. One of the resources that is already strained is video card memory. Too many textures right now are 1024x1024 that don't need to be. If people start uploading textures as 4096 which *really* don't need to be, it will get that much worse.
A 1024x1024 texture uses up 3 or 4 MB of graphics card memory. That doesn't sound like a lot... until you've got a hundred of them in view, at which point your graphics memory is starting to get choked if you're limited to 512MB (as some users still are). No video game would ever have a hundred textures of that size in view, but it's not that hard to get there in Second Life. I suspect that a *lot* of stores easily have more than that, for example.
In contrast, a 4096x4096 texture uses up 12 or 16MB of graphics memory. A hundred of *them* in view is going to fill up *anybody's* graphics card.
It also takes more bandwidth to download.
Yeah, it's still listed as one thing in your inventory. And, yeah, a few 4096 textures will *not* create any noticeable lag. But if you right now notice (as I suspect everybody does) that it takes a while for all of the textures to load when you go somewhere new, then, yes, it *will* take longer if lots of people start uploading 4k textures that you now have to download, *and* things will 'blur out' much more often as your graphics card has to swap them out. (That will also tend to reduce your frame rate.)
Instead of encouraging people to use higher-resolution textures, we should be encouraging them to think about where *smaller* textures would suffice. How many pixels is your texture actually going to occupy on the screen most of the time?
High-resolution textures that look really good when you zoom in do not come cost-free, and it's very misleading to suggest that they do.
Posted by: Bastilla Loon | Tuesday, February 12, 2019 at 06:53 PM
Oops, I made an error in my calculations. A 1024x1024 texture does in fact use 3 or 4MB of graphics card memory. A 4096x4096 texture uses 48 or 64MB of graphics memory. It is a substantial difference! 10 or 20 of them in view is going to completely fill up your graphics card memory.
In this day and age, it simply does *not* make sense to be using textures that big in something like Second Life. The only place it would make sense is if you are carefully designing your scenes so that the number and sizes of textures in them is optimized to make things look good where they need to, and to avoid using graphics memory where they don't need to. This is absolutely not the case in Second Life; things are far more chaotic there. All it would take is a few avatars in view that have 4k textures on skin, hair, and one clothing item to kill the performance of everybody around.
Posted by: Bastilla Loon | Tuesday, February 12, 2019 at 06:58 PM
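For anyone who wants to check Bastilla's arithmetic, here's a quick back-of-the-envelope sketch in Python. It only counts uncompressed RGB/RGBA pixel data; real viewer usage also includes mipmaps and other overhead, so treat these as rough figures:

```python
# Rough uncompressed texture memory, as discussed above: side * side * bytes per pixel.
def texture_mb(side, bytes_per_pixel):
    return side * side * bytes_per_pixel / (1024 * 1024)

for side in (1024, 4096):
    print(f"{side}x{side}: RGB {texture_mb(side, 3):.0f} MB, RGBA {texture_mb(side, 4):.0f} MB")
# 1024x1024: RGB 3 MB, RGBA 4 MB
# 4096x4096: RGB 48 MB, RGBA 64 MB
```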
While I completely agree with Bastilla Loon -----
I did some testing and as I expected, all that setting does is let you upload larger files. The TEXTURE that gets uploaded is indeed the 1024 max.
I am not seeing the point to this post. Most mesh creators (well, many anyway) know that baking at a higher resolution will give you a clearer texture. BUT that being said, I tested and learned long ago that the uploader's compression to 1024 gives a MUCH less clear finished texture than RESIZING IN A GRAPHICS PROGRAM and then sharpening in some manner.
BOTH files end up as 1024 and there shouldn't be ANY difference in the time it takes the viewer to render them for the avatar. BUT changing the debug settings is just the lazy man's way of getting a larger bake into world -- and definitely not the best method.
Once again some research would have been a good thing. Not everything you read on the Internet is fact!
Posted by: Chic Aeon | Tuesday, February 12, 2019 at 08:29 PM
I hope people don't read this post and get the idea that they can and should upload larger textures to the grid. That's bad for everyone in the end.
Posted by: Cake | Tuesday, February 12, 2019 at 08:38 PM
I hope creators do something about texel density first.
As far as 'oh noes, I've uploaded something huge and everyone will cry' goes, I've uploaded an 8K texture, including materials layers, then saved them back to my HD to see if they were still 1024, like the file properties said they were.
The whole point is: why should something look like legacy content when it's possible to make it look like something your graphics card would normally breeze past?
Sharpening is a valid point to a certain extent, but it's basically damage control. Say you have something that has a variety of detail from hairline to finger's width. Sharpening would obscure the finer details. See, my thing is details, and to be honest it's not something you see very often in SL. Not sure if it's laziness or lack of interest, but it is possible to make SL look like it came from this decade. Of course you could dodge, burn, and apply sharpening in just the places you want, but it's not going to look the same.
Look at objects creators have optimised -- well, I say optimised, but I have my doubts.
"Materials enabled" is bandied about as a sales pitch when the normal and spec maps make everything, and in some cases EVERYTHING, look like it's coated in 4 inches of tar. You get everything from shiny bricks, shiny cement, shiny sandstone, shiny wool, shiny carpet...
I mean, really? Materials and PBR in general are there to give something a bit more realism. Yes, Linden Labs' approach to their graphics engine is to patch things onto it when it really needs to be replaced with something your graphics card deserves.
I tried sinespace the other week, and without me doing any jiggery pokery with settings I was seeing crisp shadows, surface reflections from nearby objects (have you tried that in Second Life? It's not fun), the sea looked like the sea, reflections galore, and not once did I need to derender anything. It just works.
In short, if you don't want to try it, fine. You'll be seeing more lag from mesh bodies and hair -- yes, that's still a thing -- than from a texture uploaded at a higher resolution, which then gets converted and squished back down to the same size as almost everything else.
Chances are that so few will do this that the bad materials and texel density offenders will continue as normal, since rendering at hi res takes longer and nobody is complaining.
If you want to see just how good Second Life can get then you really need to see Beev Fallens place. Beev doesn't use these higher resolution upload settings. In some cases, quite the reverse but it remains the best showcase of what Second Life can be.
I'm rambling, probably, but as I mentioned before, visually Second Life is by and large still in the last decade. Baked-in shadows and sunlight you just can't quite match with advanced lighting, a total disregard for texel density, and laughable materials. It's no wonder it's often laughed at.
Posted by: Frenchbloke Vanmoer | Wednesday, February 13, 2019 at 12:12 AM
I have tested lots of different sizes using this debug setting, and 1024 gives the best quality, which is what I have used for a while now; anything above that size doesn't improve quality.
Posted by: Carolyn | Wednesday, February 13, 2019 at 08:19 AM
You will always get a better result if you start with a larger image. Basic photo editing 101.
https://www.flickr.com/photos/galleriedufromage/46765224742/in/dateposted-public/
It's very noticeable with normal maps.
That image there is the same image resized and uploaded as 512, 1024 and 4096.
Look closely at the 1024 and then the 4096. You will see that the lines are more visible and defined.
They are sourced from the 8k file.
I tested this for a couple of weeks before seeing if anyone else I knew could see what I could. They could.
If you're happy with the status quo, then by all means use what you prefer.
As I mentioned before I like the details. The devil is in the details.
Second Life, if you want it to, can look amazing. However because of how Linden Labs made it, you have to put in a lot of effort and time.
You can create mirrors that reflect the environment back at you. But you have to cheat in order to do so. You can make rain that reacts with the light around it, make things look a lot like their RL counterparts.
Or you can stick with great mesh builds with texturing comparable to 256x256, shiny everything just because, SHINY (hello Anxiety), and laugh in the face of texel density with multi-resolution textures on one object (hello Nomads swimming pool).
Posted by: Frenchbloke vanmoer | Wednesday, February 13, 2019 at 01:03 PM
When I first read this blog post I thought it was a load of cobblers.
But Frenchbloke is actually correct! The blog post just explains it badly.
This will also work just as well on the Linden Lab viewer & probably every other viewer. Those debug settings are from the Linden Lab viewer; they are not Firestorm-specific.
It's easy enough to test for yourself.
Get hold of a super high quality 8192 image with lots of fine detail & make sure it's saved in a lossless format - I used PNG in my test.
Set the debugs max_texture_dimension_X and max_texture_dimension_Y to 8192.
Upload your 8192x8192 image.
This image will upload and the uploader will resize the image to the maximum allowed size of 1024x1024 - this limit is enforced server side, there is no way around it.
Now open your 8192 PNG in Photoshop & reduce the image size to 12.5% using Bicubic Sharper (best for reduction) & save this image.
Set the debugs max_texture_dimension_X and max_texture_dimension_Y back to default & upload your image.
Compare these 2 uploaded textures side by side.
The 8192 texture resized by the uploader is much sharper, has less noise & the highlights pop more than the texture that was 1024 before upload.
I'm quite surprised the uploader does a better job of the resize than Photoshop, tbh.
I asked one of the Firestorm developers what was going on here & they are still investigating, but initially the thought is that this trick has more validity than it first seems.
You give the viewer a texture to upload & it's converted to a 1024x1024 j2k texture (JPEG 2000).
If you upload a 1024, the j2k compression goes through and does lossy compression on the data you give it.
If you upload an 8192 (allowed by changing those debug settings) the viewer will apply the same lossy compression, except this time it has 8 times the input resolution (64 times the pixels) to work with.
Also to note, using this trick will not cause any extra "lag" - the uploaded 8192x8192 image is still 1024x1024 after upload.
Whirly Fizzle
Firestorm support & QA.
Posted by: Whirly Fizzle | Wednesday, February 13, 2019 at 01:51 PM
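If you want to repeat Whirly's test at home, the offline half of it is easy to script. Here's a minimal sketch with Python and Pillow, where the filename is hypothetical and Pillow's BICUBIC filter only stands in for Photoshop's "Bicubic Sharper (reduction)":

```python
# Prepare the "pre-shrunk in an editor" half of Whirly's comparison.
from PIL import Image

src = Image.open("fine_detail_8192.png")     # hypothetical lossless 8K test image

# Reduce to 1024 yourself, as you would in Photoshop...
src.resize((1024, 1024), Image.BICUBIC).save("pre_shrunk_1024.png")

# ...then, in the viewer, upload this 1024 PNG with the debug settings at default,
# upload the original 8192 PNG with max_texture_dimension_X/Y raised to 8192,
# and compare the two resulting 1024x1024 textures side by side.
print("Wrote pre_shrunk_1024.png; do the two uploads in the viewer to compare.")
```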
https://prnt.sc/mkrtd3
This image shows 2 textures uploaded on the LL viewer side by side viewed in the texture preview floaters.
On the left is the original 8192 PNG image uploaded by changing the debug settings.
On the right is the same PNG reduced to 1024 in Photoshop using Bicubic Sharper (best for reduction) before upload.
It's pretty obvious the 8192 texture on the left is better quality even though both images after upload are 1024x1024.
Also note that the Photoshop-resized 1024 image, viewed in Windows photo viewer before upload, does not have all the compression artifacts you can see in the right-hand uploaded texture.
Photoshop did a fine job at resizing the texture.
The viewer upload/compression/resize clearly reduces quality much more when uploading the 1024 image than it does when uploading the 8192 image.
Posted by: Whirly Fizzle | Wednesday, February 13, 2019 at 02:22 PM
Neat trick :) (yes, I was a bit sceptical until it was better explained).
Posted by: sirhc desantis | Wednesday, February 13, 2019 at 04:52 PM
After a certain amount of digging I can explain the reasons why this works, and it has nothing (well not nothing, but only a little) to do with the debug setting.
TL;DR: conventional wisdom on image reduction is wrong -- yes, Adobe, I'm looking at you. Or, to be more precise, it is wrong for the set of values that we as SLers have; it may well be correct in broader terms.
To go into more depth I've put this blog up https://beqsother.blogspot.com/2019/02/compression-depression-tales-of.html
Beq - The aforementioned FS developer.
Posted by: Beq Janus | Wednesday, February 13, 2019 at 07:09 PM
@Whirly
an FYI about why
LL uses the Kakadu JPEG 2000 library. Kakadu will generally compress a lossless image file down to a smaller JPEG 2000 size, with far better visual quality and compression ratio than any other JPEG 2000 library, incl. commercial paint programs that don't use Kakadu
"generally" meaning over the sum of a broad range of images, compressed then uncompressed
another FYI, don't ever upload a jpeg file to SL, always upload a lossless file (TGA, PNG, etc)
When we upload a JPEG file (or any other image file), SL uncompresses it and then recompresses it, the effect of which is a loss of quality over the original image. Example: start with a 100% lossless image. Export from the paint program as JPEG at, say, 80% quality. Import that JPEG file into another JPEG compressor and export again: 80% * 80% = 64% of the original lossless quality.
Posted by: irihapeti | Thursday, February 14, 2019 at 01:15 AM
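irihapeti's generational-loss point is easy to demonstrate for yourself. Here's a small sketch in Python with Pillow and NumPy, using a made-up filename and quality 80 as an arbitrary setting:

```python
# Re-encoding an already-lossy JPEG usually drifts a little further from the
# original each generation - which is why you upload lossless PNG/TGA, not JPEG.
from PIL import Image
import numpy as np

def rms_diff(a, b):
    """Root-mean-square pixel difference between two same-sized images."""
    x = np.asarray(a.convert("RGB"), dtype=np.float64)
    y = np.asarray(b.convert("RGB"), dtype=np.float64)
    return float(np.sqrt(((x - y) ** 2).mean()))

original = Image.open("texture_master.png")            # lossless master (hypothetical)
original.save("gen1.jpg", quality=80)                  # first lossy encode
Image.open("gen1.jpg").save("gen2.jpg", quality=80)    # re-encode the lossy result

print("error after 1 encode:", rms_diff(Image.open("gen1.jpg"), original))
print("error after 2 encodes:", rms_diff(Image.open("gen2.jpg"), original))
```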
Hamlet -- based on Beq's comment above, I'd recommend adding another update to your post.
The improvement has nothing to do with higher resolution, and everything to do with the resampling method used to reduce an image from the original source down to 1024 (or 512, or 256) before uploading. It just turns out, based on Beq's post, that the resampling method used by the viewer when it reduces an image is better than what most folks have been using in their image processing programs. Change one setting in the image processing program, and you have exactly the same benefits.
The quick answer is: bilinear resampling (which the viewer uses) turns out to be better for image reduction, at least if you're using the image for SL textures, than bicubic resampling (which is what Photoshop recommends, and what the Gimp defaults to -- or at least what I've got it set to). I'm surprised by this -- I would have expected bicubic to be better. But that's what experimentation seems to show.
Beq's linked post goes into details.
irihapeti is right that you should always start with an image in a lossless compression format. Re-encoding an image with lossy compression into another lossy format will, at best, do nothing. If you're lucky, you'll never notice. But it will generally introduce new artifacts, at least small ones.
Posted by: Bastilla Loon | Thursday, February 14, 2019 at 05:20 AM
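Bastilla's "change one setting" suggestion translates to a couple of lines if you'd rather script it than click through an editor. A minimal sketch with Pillow, whose BILINEAR and BICUBIC filters are only stand-ins for the viewer's and Photoshop's exact resamplers, and whose filename is hypothetical:

```python
# Reduce the same lossless source two ways and judge the results yourself.
from PIL import Image

src = Image.open("normalmap_source_4096.png")    # hypothetical lossless source

src.resize((1024, 1024), Image.BILINEAR).save("reduced_bilinear.png")
src.resize((1024, 1024), Image.BICUBIC).save("reduced_bicubic.png")
# Open the two files side by side (or upload both) and look at edges, seams,
# and fine lines - that's the whole experiment.
```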
Actually, thinking about what the two different algorithms do, here's my prediction:
bicubic reduction is probably visually better for images that are entirely smooth. However, it's worse for images that have "sharp" features. That is, edges, or small dips (like the scar in the image above, and the indentations in the test image in Beq's post).
Reason: bilinear interpolation just looks at the nearest pixels. At worst, something sharp is just going to be a bit blurred out. Bicubic interpolation looks beyond just the very nearest pixels. In the case where you have a continuous and continuously differentiable function describing your colors, this is almost always going to be better. (This is why, for instance, higher-order numerical methods for solving differential equations give smaller errors.) However, it breaks down when you have sharp edges. If you have a sharp drop from pixel i to pixel i+1, then the cubic that fits there will be distorted on into pixels i-1 and i+2 (and perhaps beyond) as it tries to fit a smooth function to a sharp feature. I'd expect you'd get some "ringing" around those sharp features. An analogy would be the splash and mess you get at the bottom of a waterfall as the water tries to flow continuously around the discontinuity, whereas a gentle slope (uninterrupted by discontinuities like rocks) can keep a nice smooth stream of water. In contrast, the linear method -- at least assuming it's the most obvious linear method -- only needs the very closest pixels to deal with the sharp drop, so its affects don't propagate around as much.
I also suspect that visually, you're not going to easily see the differences between bilinear and bicubic for smoothly varying images. So, using bilinear for everything is probably the best choice.
Posted by: Bastilla Loon | Thursday, February 14, 2019 at 05:30 AM
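Bastilla's ringing prediction can be seen with a toy 1-D example. Here's a small sketch using SciPy's spline-based zoom as a stand-in interpolator (it is not the viewer's resampler):

```python
# Cubic interpolation overshoots around a hard step; linear does not.
import numpy as np
from scipy.ndimage import zoom

step = np.zeros(32)
step[16:] = 1.0                     # a hard edge, like a seam or a scar line

linear = zoom(step, 4, order=1)     # linear interpolation
cubic = zoom(step, 4, order=3)      # cubic spline interpolation

print("linear min/max:", linear.min(), linear.max())  # stays inside [0, 1]
print("cubic  min/max:", cubic.min(), cubic.max())    # dips below 0 and rises above 1
```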
(Effects, not affects. Gah. Embarrassing. I had a spurious quotation mark in an earlier post as well. I need to Preview more.)
Posted by: Bastilla Loon | Thursday, February 14, 2019 at 05:32 AM
OK, sorry, all this does is remove the upload limit of max size 2048 at upload. All files, no matter how big, are resized to 1024.
Use the bulk upload option and it won't give you the max file size warning, but it will, of course, still downsize to 1024 max. I remember this from a while back when I was experimenting: you still need to change x and y, or you can get a warning that says 'cannot upload image larger than 4096*2048'.
I always just use the bulk upload as I don't much feel like resizing a finished image just for upload. Unless there is something going on behind the scenes I don't know about...
Posted by: jackson redstar | Thursday, February 14, 2019 at 07:57 AM
I came back to this post to see what was up, read all the info as well as Beq's long and detailed post. Thought I had really learned something and was happy.
BUT ???
Something that I think folks forgot about (or I missed in all the lengthy discussions and if so I apologize for assuming) is that while there is THEORY and even testing and examples, there are other factors that need to be taken into consideration.
One is software and the settings within the software (besides the bilinear). I am using the latest version of Corel, and within that there is a sharpness setting with a slider. How that is set will of course make a difference. Also there is the bake resolution (for mesh bakes of course, not photos), the DPI of the texture setting, etc.
But clearer is better and so of course I did my own tests.
For ME, the bilinear was worse than the bicubic, not better (same sharpness setting, with all other settings also the same).
That being said, the UPLOADER DID DO THE BEST JOB -- so for me again -- that is good to know.
Am I ever going to bake and upload an 8K texture to SL? Doubtful. I did that ONCE for Sansar for a landform texture that was well over a SIM in size. It took forever on my pretty hefty computer. But as Beq said, it does open up possibilities for getting good and sharp SMALLER textures :D.
Posted by: Chic Aeon | Thursday, February 14, 2019 at 03:24 PM
Need to clarify.
Sansar's largest texture upload (except for skyboxes which are actually "skies", not living areas) is 4096. An 8K texture is 64 times larger than a 1024 -- if I did my math correctly. So pretty much 64 times longer to bake -- a go to bed and see it in the morning thing for me.
Posted by: Chic Aeon | Thursday, February 14, 2019 at 03:49 PM
There is now a discussion about this over on the SL forum
https://community.secondlife.com/forums/topic/433164-bi-curious-you-should-be-or-why-your-assumed-wisdom-may-not-be-correct/
Posted by: Whirly Fizzle | Thursday, February 14, 2019 at 05:07 PM
The difference seen is due to the JPEG 2000 algorithm being applied to the 8K version, the result of which is then downscaled to 1024 for the upload.
Compare this to downscaling the 8K to 1024 in Photoshop FIRST (doesn't matter with which algo) and then applying the JPEG 2000 algorithm upon save.
The first version will always come out objectively superior, given this setup. Very nice!
Posted by: Kerrang | Monday, March 18, 2024 at 03:08 PM
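Kerrang's compress-first-then-downscale claim is also testable at home, with the usual caveats: Pillow's OpenJPEG-based JPEG 2000 encoder only stands in for the viewer's Kakadu, the 8:1 rate is an arbitrary guess, and the filename is hypothetical:

```python
# Compare the two orderings: (A) compress the 8K, then downscale the result;
# (B) downscale to 1024 first, then compress. Judge the outputs yourself.
from PIL import Image

src = Image.open("detail_source_8192.png")   # hypothetical lossless 8K source

# Order A: lossy JPEG 2000 on the full-size image, then scale down.
src.save("big_lossy.jp2", quality_mode="rates", quality_layers=[8])
Image.open("big_lossy.jp2").resize((1024, 1024), Image.BILINEAR).save("order_a_1024.png")

# Order B: scale down first, then lossy JPEG 2000 on the 1024.
small = src.resize((1024, 1024), Image.BILINEAR)
small.save("order_b_1024.jp2", quality_mode="rates", quality_layers=[8])
```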