Fixing broken “springs” on FirmTek trays

After a couple of years of use, the “spring” in the catch lock of the FirmTek trays gives out, and you have to manually push it up to engage the latch.

(See trays here: http://www.firmtek.com/seritek/seritek-2en2/)

I asked, and they will not sell replacement parts, nor will they fix the tray if it’s over 1 year old.

So, I tore it apart myself.  Not hard to do. (Please read all instructions first, before proceeding.)

0) Basically, we’re going to replace the “spring” with some spongy packing foam.

1) Remove the three screws that hold the plastic front-piece on the metal tray.

2) Slide the front-piece off carefully, noting the position of the light-guides so you can replace them later.

3) The lock/slide mechanism is fit into place and held there by its sides, which snap over small nubs. To release the lock-piece, use two small screwdrivers (or toothpicks, whatever…), one on each side, to pry the side pieces away from the container, just enough to allow them to pass over the nubs. The lock-piece pulls straight out.

4) You’ll see that the “spring” is really (in an example of poor engineering design) just two molded plastic tabs, and that over time they have weakened where they meet the main body.

5) Snap off these two pieces.

6) In their place, insert a scrap of springy packing foam, or a bit of sponge, or something similar that can be compressed and then bounces back. You’ll have to cut it to size, and jam in enough to make sure you can both compress the unit and have it spring back. This will take a bit of experimenting to determine the correct amount of foam, since various foams have different compressions. The foam goes where the “springs” were, and pushes up against the bottom of the lock.

7) Slide the lock-piece back into place, using one of your screwdrivers to ensure that the foam stays properly placed beneath the metal lock.

8) Test the resulting springiness and mechanism.

9) Observing the light-guides, replace the assembled front-piece and the three screws that held it in place.

The result depends on the foam, and on your skill at judging the right amount to use in place of the “lever-springs.”

The next time I do this, I’ll take photos and post them here. Meanwhile, I hope this explanation will suffice. It should be pretty obvious once you get going.

hth.

On UPS (Uninterruptible Power Supply)

First, there are several types, but I’ll point out that they fall into two camps: those that produce a “pure sine wave,” and those that produce a “stepped approximation” of a sine wave.

You pay more for the former, and it’s easier on your equipment. In fact, some of the newer computer power supplies (those with Active PFC) will not work with the “stepped approximation” type.

If you are using it with high-end audio equipment, I strongly recommend the “pure sine wave” UPS. (CyberPower makes fine units at reasonable prices.)

That said, here’s something I’ve apparently learned over the past few days. (“Apparently,” because it’s anecdotal…)

It appears to me that UPSes which have some way of indicating the state of the battery charge may not truly be indicating it.

One of my APC units showed a full charge, which should run my equipment for 20+ minutes. However, a practical test showed that it only ran for 4 minutes. When I completely discharged the batteries (using an incandescent bulb, which I didn’t own and had to run out and buy) and then recharged the unit, it still showed a full charge, but this time, when I tested it, it ran for 17 minutes.

Next, I got my new UPS which, fresh out of the box, showed the batteries at 100% charge… alongside a printed caution to charge the batteries for 8 hours before use (which I’m doing now.)

Now, I can’t verify that the batteries in the new unit were not fully charged, but a new, unopened device arriving with a genuinely full charge would be most unusual.

So I’m left with the anecdotal “evidence” that an LCD readout can provide false information about the true state of the batteries, just as older battery testers did when they failed to run the test under some built-in load.

Why say this?  Because if your UPS isn’t providing the run time you think it should, let me suggest that my experience says:

Run down the unit using a 100-watt light-bulb*, recharge it for 8 hours (i.e., overnight), and see if things haven’t improved. I’m not swearing they will, since I don’t know the state of your unit or its batteries, but it’s worth a go.

*Why not use something else? Because the bulb will drain the batteries to a lower level than something with higher wattage would (before the self-protection on the unit cuts off all power.) For reference, my 100-watt bulb actually drew 85 watts, and took 74 minutes to drain my 1300VA unit.
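
If you want to put numbers on such a test, the arithmetic is simple. Here’s a little Python sketch (the figures are mine, from above; plug in your own, and bear in mind that heavier loads drain batteries less efficiently, so the runtime estimate is optimistic):

    # Estimate usable battery capacity from a light-bulb run-down test.
    load_watts = 85        # actual draw of my "100 watt" bulb
    runtime_minutes = 74   # time until the UPS cut off

    # Usable energy delivered, in watt-hours:
    watt_hours = load_watts * runtime_minutes / 60
    print(f"Usable capacity: {watt_hours:.0f} Wh")   # about 105 Wh

    # Rough runtime estimate for some other load:
    other_load_watts = 300
    est_minutes = watt_hours / other_load_watts * 60
    print(f"Estimated runtime at {other_load_watts} W: {est_minutes:.0f} min")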

hth

Tracy

On Fonts

I was asked about fonts and font-management by someone who “has tons of fonts.”

If you really have “tons of fonts” then that could be having an effect on your system. I personally keep my -active- font collection pared down to a minimum, and use Font Agent Pro to activate other fonts as needed. For example, one seldom needs “headline” (aka “decorative”) fonts active all the time, and those constitute the bulk of most font packages. With software that auto-activates the font only when it’s needed, you can remove some of the sludge from the system during normal use.

Such font software will also check for font problems (and there is also the excellent Font Doctor, which does the same thing.) Font Book has an implementation as well, although I’m not sure if it will repair broken fonts.

Fonts are considered part of the operating system, and a damaged or corrupted font can cause all kinds of mysterious issues. I’ve experienced this over the years myself, hence my recommendation, above.

I generally leave the Microsoft and Adobe font packages alone (unless, of course, Font Doctor finds a corrupt font) and try to keep my /Library/Fonts folder down to what Apple supplies and a few others that are needed by specific software, such as the Britannica et al. Equally, I try to keep ~/Library/Fonts tiny too.

I do have some 1500 other typefaces (many of which are “headline” fonts) that I’ve acquired over the years (and have converted to OTF, to stay up with OS changes) but these are all kept in a separate folder, organized and activated only as I need them, using Font Agent Pro.

(I was then asked about what was “the best” font software, to which I replied:)

Can’t reply as to whether or not Font Agent Pro (FAP) is “the best,” because I’ve been using it so long that I’ve not kept up with the others. If you already have software that is current and manages fonts, you probably ought to use that, as I’d assume the differences between packages are minor.

By way of reference, I have about 90 font-families / 140 fonts active on my computer right now. FAP lists 4247 fonts in my collection (which probably represent about 1200 families, I’d guess.)

As to using a font manager vs not: yes, it is far simpler to not use  one.

Changing from one manager to another isn’t particularly difficult, but can be stressful if things don’t go right.

Generally they keep the actual fonts organized in a separate folder. Changing managers would then entail uninstalling the software, moving all those fonts to (say) your ~/Library/Fonts folder, and then running the new font manager. The new manager will gather up and redistribute the fonts according to its own needs.

That said, most font managers will let you create your own categories and arrange things to your heart’s content. Converting -that- from one manager to another is pretty much impossible, if my experience is any indication, and a serious time-sink to attempt. For that reason, I generally don’t bother; large type houses and graphic design firms do bother.

Personally, I’ve found FAP to be more than adequate for my modest needs, but, as I noted, that likely applies to whatever you have as well. Just make sure you have the latest version before you go mucking about.

As to how to proceed: I’d get a clean copy of the system fonts from somewhere, move your /Library/Fonts folder out of /Library, and install the clean fonts in its place.

Do that for your ~/Library/Fonts as well.

Then run the font management stuff, and include those moved folders. As it is right now, on my setup, I have a “MyFonts” folder at the root level of my hard drive.

hth

On Drive Repair Utilities

Most people understandably don’t have a clue about how a hard drive stores information, and therefore don’t have a clue about what it takes to repair it when something goes wrong.

Think of a hard drive like a book. There is a table of contents which points at the chapters. Let’s say that chapter 2 begins on page 30; chapter 3 begins on page 45 and chapter 4 begins on page 70.

Now, if the Table of Contents gets messed up and says that chapter 3 begins on page 72 (instead of page 45), you’re going to start reading in the wrong place.

(Obviously here, I’m equating the actual chapters to files on your hard drive, and the Table of Contents to the drive directory, the “catalog.”)

So far that’s not too bad, and products such as DiskWarrior can easily go thru the actual “book” and find where the chapters begin, and replace the defective table of contents with a new one.

This is by far the most common corruption on a hard-drive, and the easiest to fix.

However, in reality, the situation is considerably more complex, because instead of each page in a given chapter following one after the other, they are scattered all over the book. In the worst case scenario, no two adjacent pages are actually adjacent to each other.

So, in order to keep track of that mess, there is a hidden list of where things are, “attached” to the chapter listing in the Table of Contents. In order to “read” a chapter’s worth of pages in sequence, the computer has to refer back to this hidden list (called a B-Tree) to find each page. The B-Tree points at “extents,” which are contiguous blocks of space on the drive.
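
To make the idea concrete, here’s a toy sketch in Python of what a catalog entry and its extents amount to. This is NOT the real HFS+ on-disk format (the names and numbers are invented for illustration); it just shows how “one file” is really a list of scattered, contiguous runs:

    # A toy illustration of a catalog entry and its extents.
    from dataclasses import dataclass

    @dataclass
    class Extent:
        start_block: int   # where a contiguous run of the file begins
        block_count: int   # how many blocks that run occupies

    @dataclass
    class CatalogEntry:
        name: str
        extents: list      # the first few extents live here; any more
                           # spill into the Extents Overflow structure

    # "Chapter 3" really lives in three scattered pieces:
    chapter3 = CatalogEntry("chapter3.txt",
                            extents=[Extent(900, 1), Extent(45, 2),
                                     Extent(3102, 1)])

    def blocks_in_order(entry):
        # Read the "pages" in sequence by walking the extent list.
        for ext in entry.extents:
            for block in range(ext.start_block,
                               ext.start_block + ext.block_count):
                yield block

    print(list(blocks_in_order(chapter3)))   # [900, 45, 46, 3102]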

Already, finding and fixing a corrupted directory becomes far more difficult than in our first simple example.

But still doable, based on certain assumptions…

But we’re not done yet… because this is no ordinary book! The fact is that the reader can write new pages and add them, or delete old pages. Now the process of keeping the contents up to date is even more complex… and made harder because that list of extents may run out of space. If that’s the case, there is a structure created, called the Extents Overflow, which is also examined to find where the rest of the pieces of the file might be.

You’ve got lists pointing at lists of lists which point at pages in chapters or documents…. it’s “non-trivial” as they say.

NOW… suppose you delete one of those pages. Heck, delete several of them (which you think of as a series of contiguous pages, but which are in fact scattered all over the drive) and you have “holes” in the lists of lists… all of which have to be updated to indicate that they are now empty space on the drive, and can be used for something else.

Whew! NOW… you go add a new file or report or install a new program, and some of it goes in those holes. Well… it goes where the lists -say- that the holes are.

However, if something goes wrong (the power goes out; some installer doesn’t work right) and one of those lists ends up reporting that there is free space when in fact that space is actually used by a program or file… and a new program overwrites it (because the list incorrectly said it was empty)… well, NOW you’ve got a real mess.

You’ve got a totally corrupted hard drive, and no amount of Disk Warrior, Drive Genius, TechTool Pro et al is likely to succeed in fixing it. That’s because there’s data where the lists say there is supposed to be data, and data where the lists say the space can be reused. (Deleting things does not do anything at all to the data: it just changes the lists themselves.)

Some repair utilities have routines that can handle -some- of this kind of corruption, but only if it happened THIS way and not THAT way, while others can handle those lost chains only if they happened THAT way, but not THIS.

And if you run too long with a disk that has corrupted lists of what is free space and what is not… if you run with those lists wrong, then as time goes by, the corruption becomes worse and worse. Eventually you’ll get to a place where absolutely no software will ever be able to repair your drive… and you’re toast.

So: that’s why anecdotal reports that Drive Genius fixed something Disk Warrior couldn’t don’t mean diddly squat. Had the corruption been the other way around, DW would have fixed it and DG might have failed.

The fact is that the products that are out there now: Disk Warrior, Drive Genius, TechTool Pro, DiskTools Pro… all of them are good at what they do; most of them do fundamentally the same things; and all of them do those things slightly differently.

So anyone saying, as a general, overall statement “Disk Repair Product A works, and Product B doesn’t, so never buy Product B” simply doesn’t understand what is going on.

-Most- of the time, for simple directory corruption, any and all of those products will repair the catalog. Because simple corruption is the most common thing, and because Disk Warrior has chosen to concentrate on that, it has a justified excellent reputation. But there certainly are things it will not fix that the other packages will.

I hope this helps explain why you see so many “A works, B sucks… NO NO: B WORKS, it’s A that sucks!” comments out there.

HTH.

___________________________________________

Tracy,

>Thanks for another of your great contributions to this list.

>It brings up a few questions:

>If a drive is cloned, via SD or CCC, does that make files more contiguous?

Very good question. If you begin with a blank destination drive, then the answer is “yes.” This is a -much- faster way of “defragmenting” a drive (than to use defrag utilities.)

(CCC offers a “block copy” which is an -exact- physical-layout copy of the source. Using it, therefore, dutifully copies all the fragmentation of the original.)

>It used to be recommended to drag copy a main drive to a blank drive, erase the main drive, then drag copy everything back rather than using one of the utilities to clean this up. Would either method help reduce this problem?

Drag and drop copies of a boot drive will not result in a bootable drive under OSX. File links and structure are broken, and the system will not operate.

>Is a reasonably frequent restore from backups a usable method to lower this kind of corruption? Obviously, this cannot correct a power drop failure to write correctly.

Not really. You’d not want to clone back something you did, say, a week ago, because you’d be losing all the files that were newly created during that week.

Fragmentation is perfectly natural, and not to be feared. The HFS+ system does a grand job of keeping it under control. Defragmentation is normally not needed on a Mac (or at least -very- rarely.) The only common use for it is with data drives devoted to huge files, such as video files which are being edited. A severely fragmented video file will not stream thru the editor as smoothly as a defragmented one.

>One could infer from the article that using several (DW, DG, TTP) might not be a bad idea. Do they play well with each other?

Hmmm… as a “pro” with clients, I have one of each. Whether or not that’s a great idea in general is slightly up in the air. If money is not an issue, then having each wouldn’t hurt… with a caveat that comes from the second part of your question: it’s been my experience that “ganging them up” one after the other with the thought that one might fix what another missed, isn’t the best of all possible ideas.

A few years ago, before these had all matured, it was definitely a -bad- idea, since it frequently ended up making things worse. Now that they have matured, I’d be less hesitant about it… but only “less.” They each have various different routines to handle situations, and it’s almost impossible to say that they will or will not “mesh.”

My own approach is to use one, and then see if the issue is fixed. If it is, I’m done. If not, then I’ll try it again with the same software. If the issue still remains, only then will I try another. YMMV.

Someone asked if my explanation means that one should “regularly defragment your drive”…

Absolutely NOT! First, Apple’s HFS defrags many frequently used files automatically. Second, the fact that the file structure is complex doesn’t mean that defragging makes it that much less complex. Third: defragging is dangerous to your data, and should never be done at all without a current, full, bootable, backup. Fourth, if you’re going to create a clone/backup, you can defrag faster and safer by just wiping your drive and cloning back. Fifth, drives and the HFS are fast enough that you won’t likely notice any difference, unless – Sixth, you’re doing video or audio editing and you have a drive dedicated to it. THEN, you’ll find A/V will stream faster into your editor if the files are contiguous.

So, what do you use?

I own all of them. Some people say that Disk Warrior does what no others do – and that’s not correct. In fact, the analyze/rewrite-catalog approach was first done by TechTool Pro years ago. Now they all do it. That said, Disk Warrior has concentrated almost exclusively on that aspect (although they’ve expanded their tool in later versions) and pretty much has directory corruption repair under control. Because of that, and because it’s generally thought of as a “one-trick pony,” most folks (including me) go to it first.

That said, Drive Genius and TTP do the same thing… and they do more as well. I like Drive Genius for a quick check of the drive and for block cloning. It has other features which are used less frequently, and seems to perform well. TechTool Pro and DiskTools Pro also have their uses.

Each of these programs overlap the others to some extent. Do you need all of them? Probably not, unless you’re doing repairs professionally. The general consensus that Disk Warrior is the go-to tool is correct for most cases, since most problems are of the kind it repairs with great success.

But honestly, the others will do the same thing, and offer more features. The thing is, as I’ve explained above, there IS NO “one best” and I wouldn’t hesitate to recommend any of these.

resolution, part 1

(This is a four-part series on picture/image, screen, and printer resolution and how they all work together, with the hopes that it will clear up Dots Per Inch (dpi), Pixels Per inch (ppi) and why things look different on your computer monitor than they do when you print them out. Taken from a list I’m on.)

On Nov 28, 2009, at 11:24 PM, Mark wrote:

Tracy,

While you are at it and if you don’t mind, an ordered, complete explanation, like you so wondrously can give, of printing, viewing and other designations, especially LPI and PPI and how they all fit together in the whole picture would be a super help.

Thanks for this fascinating thread.

M

well…. 🙂

There are whole chapters in books written on the subject, so it’s much easier to just answer individual questions than it is to spend a day trying to compose a lesson in it!

How about an outline? (With significant technical oversimplification…)

Digital images are really long lists of numbers. You cannot hold up a file to the light and see an image. The numbers come from interpreting the values of light sensors, called photo sites: incredibly tiny bits of electronics that emit a current when struck by photons (light). The more photons that hit a site, the more electricity it gives off. (OK: strictly speaking this isn’t really what’s going on (it’s more like a photon receptor/bucket), but let’s live with this version anyway.)

The light sensitive part of your camera (what used to be film) is a flat plate of millions of these things, arranged in a rectangle. So, for example, if your camera is a 12-megapixel one, then (let’s say) the checkerboard of that plate is 4000 x 3000 (since 4000 x 3000 = 12 million).

So, you’ve got 12M little sites each giving off some amount of electricity. Notice that this doesn’t have anything whatsoever to do with colors yet… just 12M different levels of electrical current.

To get color out of that, the little photo sites each have a colored filter on top of them, either red, green or blue. (There are twice as many greens as the other two, but don’t worry about that.) They are arranged in a nice pattern (the Bayer pattern) like a checkerboard. A red filter only allows the red wavelengths thru; the blue only allows blue, and green allows green. (Why RGB? Because that’s what our eyes see, and the combination of all three make up all the other colors.)

So, the net result is that you get a current which represents the amount of red, green or blue light (depending on the filter) for any given cell.

When you click the shutter, the light exposes all 12M photosites, and the current is captured by an Analog to Digital (A/D) converter, which has the fairly simple task of assigning a number to the amount of current captured at each site. In little cameras, that number can be between (say) 0 and 1000, or in more expensive cameras it can be between 0 and 16000. (What we’re really talking about in doing that is how small a difference can be accurately recorded: do you have 1000 shades or tones or do you have 16,000 of them?)

So what you end up with then, inside the camera, is a file which is a huge table of these 12 million numbers, each one representing one photo site.

Now, one of two things happens. Either that file is left alone and just sits there on the memory card, without any further interpretation – in that case, it’s in its ‘raw’ form, unchanged… or… the software built into the camera starts massaging it. It will bump up some numbers based on the other numbers that are nearby (“sharpening” the “image”); adjust the numbers based on the overall “temperature” (color temperature); and do all kinds of other wonderful things you can do with a computer and numbers. (Yes: inside your camera -is- a computer.) All that is well beyond the scope of this little ditty… except this one thing: if your camera is going to give you a “jpg” file, then it also compresses that whole range of numbers (0-1000, or 0-16,000) down into 0-255.

Yes, you’re right: it throws away a huge amount of data.
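
If you want to see roughly what that compression amounts to, here’s a tiny Python sketch. I’m using a naive linear squeeze; real cameras apply a tone curve first, so treat this as illustration only:

    # Naive linear squeeze of a raw sensor value into JPEG's 0-255 range.
    # (Real cameras apply a tone curve first; this is the simplest case.)
    def to_8bit(raw_value, raw_max=16000):
        return round(raw_value * 255 / raw_max)

    print(to_8bit(16000))   # 255 (a fully exposed site)
    print(to_8bit(8000))    # 128
    print(to_8bit(100))     # 2 -- about 63 raw levels collapse onto
                            # each of the 256 output values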

And one way or the other (raw or jpg) you eventually download that list of numbers to your computer.

I’ll break there for tonight. It’s well past my bed-time, and I’ll resume some time tomorrow, and cover PPI (pixels per inch), lines per inch, LCD and CRT screens, and output for the web, photos and books.

Meanwhile, here’s a definition: A pixel is a picture element; the smallest “dot” of information. One of those numbers from that table of numbers we just downloaded from the camera. (Again, that’s not quite the proper definition, but it’s close enough for this discussion.)

Mañana.

Tracy

resolution, part 2

OK… continuing (this too will have to be modest, as I’ve just discovered an issue with one of my websites that needs fixing today.)

If you were really paying attention last night, you might see something wrong with the definition I signed off with:

Meanwhile, here’s a definition: A pixel is a picture element; the smallest “dot” of information. One of those numbers from that table of numbers we just downloaded from the camera. (Again, that’s not quite the proper definition, but it’s close enough for this discussion.)

The problem? When we get that file of numbers loaded into a program on our computer, each “pixel” isn’t red or green or blue: we see it as orange, or yellow, or tan, or some other color, one of perhaps millions or billions of shades or hues.

In fact, we have to separate out a true “picture element” from its components. Remember that I said you combine red, green and blue to make a color? (To save typing, it’s RGB from now on…) But each of those RGB numbers from the camera file is an individual, single number, from a different individual photo site.

So, what’s going on? Well, for each one of those different individual photo sites (which are either R or G or B), the computer looks at the sites around it, and decides from those surrounding sites what the likely amount of the other two colors will be. It’s called “demosaicing.”

Whoa! What? OK: think of a checkerboard again. Each square is one of three alternating colors: R, G or B. Mentally impose the whole image over the whole checkerboard. You end up with an image made up of only three colors, resting alongside each other. Not remotely realistic.

Now, look at just one of those sites. Say it’s blue. Next to it is a green one, and a red one, on the sides and top. What the computer (either yours or the camera’s) does is look at those surrounding sites and extrapolates how much G and how much R must have been on the B site (based on how much is actually on the sites that surround the B site.)

Yep: in a sense, it “makes up” the missing colors for each site.
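
Here’s a toy Python sketch of that “making up,” using the simplest possible method: averaging the neighbors. Real demosaicing algorithms are far cleverer, and the numbers here are invented purely for illustration:

    # Toy demosaic step: estimate the missing Green value at a Blue site
    # by averaging the four Green sites around it.
    import numpy as np

    # A tiny Bayer-style mosaic: each cell holds ONE number, and the
    # pattern tells us which filter sat on top (G B / R G here).
    mosaic = np.array([[ 90, 200,  88, 210],    # G B G B
                       [ 60,  95,  62,  93],    # R G R G
                       [ 92, 205,  89, 208],    # G B G B
                       [ 61,  94,  63,  96]])   # R G R G

    def green_at_blue(row, col):
        # At a Blue site, the up/down/left/right neighbors are all Green.
        neighbors = [mosaic[row - 1, col], mosaic[row + 1, col],
                     mosaic[row, col - 1], mosaic[row, col + 1]]
        return sum(neighbors) / len(neighbors)

    # The Blue site at (2, 1) read 205 for blue; its estimated green:
    print(green_at_blue(2, 1))   # 92.5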

So now, when you open one of those files of numbers in Photoshop, you can click on the “Channels” window and see, for any pixel on the screen, the Red channel, the Green channel and the Blue channel, and at the top, the combined RGB channel, in full living color (the combination of the three RGB channels).

When you get a jpg file from your camera, the calculation is done for you, so that each pixel comes ready with three numbers, each representing one of the three primary colors. If you use raw files, however, your desktop computer will do that computing, under your control. Either way, you end up with a point (one of those 3000 x 4000 points) that has three numbers attached to it.

As you can see, we’re still dealing with numbers, however. In fact each of those channels (RGB) is just a list of numbers, which your computer can -interpret- as being red or green or blue. They are just numbers, however. You can tell the computer to interpret them however you like. In fact, when one “converts” a color image to Black and White, all that’s really going on is that you’re telling Photoshop: “don’t interpret the channels as colors; just interpret them as bright/dark” (called “luminosity”).

What do I mean by “interpret?” Pretty simple: if you are looking at say the Blue channel, and the number in the file for some pixel happens to be 30, the computer will turn on the Blue on your monitor to a level of 30. If the number is 255 instead, it will tell the monitor to blast out that blue pixel with full brightness; if the level is Zero, it will tell the monitor to turn off the blue. Just like turning up or down a dimmer lamp. The higher the number, the brighter.

Zero is off – black. 255 is fully on (either R, G, or B).

Colors besides RGB, however, are a combination of the RGB primaries at various individual levels. For example, 103, 50, 117 are the RGB values for violet; 255, 255, 0 are the numbers for yellow (red + green = yellow.) 128, 128, 0 is also yellow, but it isn’t as bright. And any place where all three numbers are the same is a shade of gray.
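
Since they really are just numbers, you can play with the interpretation yourself. A tiny Python sketch (the black-and-white weights shown are the classic Rec. 601 recipe; other weightings exist):

    # Each pixel is just three numbers; "red," "green," "blue" are
    # only an interpretation.
    violet = (103, 50, 117)
    yellow = (255, 255, 0)
    dim_yellow = (128, 128, 0)
    gray = (80, 80, 80)          # any R = G = B triple is a gray

    def luminosity(rgb):
        # One common recipe for "interpret as brightness only."
        r, g, b = rgb
        return round(0.299 * r + 0.587 * g + 0.114 * b)

    print(luminosity(yellow), luminosity(dim_yellow))   # 226 113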

So how does the computer monitor make those colors? Just like I said: if you look with a magnifying glass at your computer monitor, you’ll see that it’s pretty much the same thing as the sensor in your camera: there are little squares (or circles or ribbons) which are either red, or green, or blue, all packed tightly together next to each other. So, if you have a photo which has a big red wall, then the pixels where it’s red have the RGB numbers 255,0,0 (full red brightness; no green and no blue) and the monitor will go thru and turn on all the red squares, and turn off all the green and blue. Why does it look solid red to you when in fact 2/3 of the little squares are turned off? Because the squares are so little that your eye blends them together.

And that’s why if you have a red square at 103, next to a green square at 50, next to a blue square at 117, your eye sees a violet dot.

In short, if you skip over all the discussion, then it’s really like this: a red photo site on the sensor puts out a number value based on how bright the light was that hit it. That number is then used to turn on a red square on your computer screen to the same level of brightness.

Red(camera) -> 128 -> Red (monitor)

end of part 2

back later.

T

resolution, part 3

Bruce asked some questions (below) that can easily be answered now, and not really as a detour, either. (The answer to part two of his question, about paper, will come separately.)

But, as to how the eye/brain reacts to dpi/ppi…

First, dots per inch is strictly a printing term, and should not be (but often is) used when discussing computer monitors. Frankly, it’s hardly a major issue, since in casual conversation we generally know if we’re discussing a computer screen or a book… and in those kinds of practical terms, they pretty much mean the same thing. (It’s just kinda nice to use the correct words…)

Generally speaking, however, pixels refer to transmitted light, and dots refer to reflected light. So, you’ll have pixels with a digital camera and a scanner and a monitor, but your photo prints, books and magazines (which you can’t see in the dark because there is no light to reflect off the page) are “dots.”

First, let’s look at your monitor. It’s probably right around 100 PPI. (I’m going to limit this to LCDs for convenience.) That number is fixed. (The iPod Touch is 160 PPI, which is why that screen is so easy on the eyes despite its small size.)

So, what it really comes down to is the native resolution of your monitor, and it has almost nothing to do with the “resolution” of the picture. (The other thing is how far away you are from an image when you see it, but I’ll get into that later.)

OK: new word – ‘resolution.’ Strictly speaking a digital image has no resolution in and of itself. It simply has dimensions: 640 x 480; 3000 x 4000 and so on. “Resolution” is “resolving power” and a photo can take it on only in comparison to something else, such as the size of the subject of the photo, or the photo when shown on a screen or printed in a book.

Let me get the first one out of the way, er… first. If you take a photograph of a penny with a macro lens, so that the penny fills the entire image, then a sensor that has 12,000,000 “pixels” (photo sites) will resolve more detail than a sensor with 300,000. You’ve divided the image up into many more discrete points of information. Thus the 12MP camera can be said to have greater resolution than a camera that takes a 640 x 480 (1/3MP) image.

But the photos -themselves- in either case, do not have “resolution” – just dimensions.

Now you can start talking about resolution in the second sense when you start specifying size (in much the same sense that we specified the size of the penny object).

And that’s where the PER INCH as in DPI or PPI comes in.

Let’s work with the 3000 x 4000 12MP image. That’s a fixed size. It never changes in this example.

If you decide to print that image out at 300 dpi, then the width of the printed image will be 3000/300 or 10 inches. Each inch of the resulting print will have 300 of the original 3000 pixels allotted to it.

Thus the resolution of that -print- is 300 dpi.

If you decided to print that very same file at 150 dpi, then the resulting print would be 20 inches wide; and if printed at 75 dpi, it would be 40 inches wide.

But the resolution at 75 dpi is 1/4 what it is at 300 dpi. Bigger print; lower resolution.
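
The arithmetic is simple enough to put into a couple of lines of Python, if that helps it stick:

    # Printed size in inches = pixel dimension / output dpi
    def print_width_inches(pixels_wide, dpi):
        return pixels_wide / dpi

    for dpi in (300, 150, 75):
        print(f"{dpi} dpi -> {print_width_inches(3000, dpi):.0f} inches wide")
    # 300 dpi -> 10 inches; 150 dpi -> 20 inches; 75 dpi -> 40 inches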

And now we get to viewing distance, and the human eye. Take that first 10″ print and put it on the wall 10 feet away from you. Does all that exquisite detail do you any good? Nope: it’s lost because you are too far away to see it in that little print. But put that 75 dpi, 40″ wide print on the same wall, and suddenly you can see things you never saw before. It will look wonderfully detailed… yet viewed up close it will look terrible, and only the 10″ one will look good held in your hands.

If you’ve ever seen one of those JumboTron stadium-sized monitors up close, you’ll find that each dot/pixel/point is about 1/2″ in size. Yep: 2 dpi. But that’s fine: no one ever looks at it from 2 feet away; 200′ is more like it.

How does that work? Hold your thumb and forefinger about an inch apart, at arm’s length. Put a ruler just touching them. How many 1/4″ marks can you see between your fingers? Should be four. That’s an effective “marks per inch” of four. Now give that ruler to a friend and have them stand 10 feet away, and look at the ruler between your fingers again. How many 1/4″ marks can you see now? Should be about 48. That’s an effective “marks per inch” of 48.

(Thus the apparent resolution of an image has to do with how far away you are from it. The ruler didn’t change; the one-inch space didn’t change. To see the same resolution (that is 4 marks per inch) your ruler at 10 feet would need marks every three inches, not every 1/4 inch.)
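
That rule of thumb is easy to write down: an image viewed from K times farther away needs only 1/K the resolution to look equally sharp. A quick Python sketch (the viewing distances here are my own illustrative guesses):

    # Apparent sharpness scales with viewing distance.
    def equivalent_dpi(dpi_up_close, close_ft, far_ft):
        return dpi_up_close * close_ft / far_ft

    # A 300 dpi print examined at about 1.5 feet looks roughly as
    # detailed as...
    print(equivalent_dpi(300, 1.5, 10))    # ...45 dpi seen from 10 feet
    print(equivalent_dpi(300, 1.5, 200))   # ...2.25 dpi on a JumboTron at 200 feet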

So, when someone specifies a photo as “…your file should be 1000 x 2000 at 300 dpi,” they have not got a clue what they are talking about. A digital image file is always specified by its dimensions, and if they specify a DPI, it -only- makes sense if they specify the “I” (the inches) as well. “5 x 7 inches @ 300 dpi” is an image that is 1500 x 2100 pixels.

So: armed with that info, let’s look at your question about why a nice, small jpg looks fine on your screen, but terrible when printed out.

A very common size on the web is 320 x 240, which on your monitor will make an image about 3 inches wide and just under 2.5 inches high. Now part of the reason that looks fine is that you’re seeing projected light: it is just like shining a flashlight into your eyes. That obscures/blurs detail. And part is, as you suggested, because your mind fills in a lot. Part can be skillful trickery (read: psychology) by the image creator (sharpening does not add any detail; it just makes you think it’s there.) (Your TV probably has a resolution of about 1/2 of that of your monitor; around 50 ppi, and that’s for HDTV!)

On the other hand, in reflected light an image printed at 100 dpi just doesn’t cut the mustard. You want something at least 150, and likely higher (288 – 360 for a photograph.) You can get that out of your 320 x 240 image, of course… just tell your printer to print it at 300 dpi. And the resulting photo will be about 1 inch wide and 4/5 of an inch high, and will look plenty detailed… if you like postage-stamp sized photos.

So what if you love that 320 x 240 picture and want to print it out at 7 inches wide? Well, you can if you set the dpi to 45. And it will look pretty terrible… (Unless, of course you look at it from about 10 feet away…)

Or, you can tell your software to “enlarge it,” but the simple fact is that no software can make up details from something that is not there in the first place. To print that 320 x 240 at 300 dpi and 7 inches wide, those 320 existing pixels have to magically become 2100 pixels. Where do the extra 1780 pixels come from? Thin air, is the answer. The software can try various tricks (some pretty sophisticated, frankly) but really it only has one original pixel for every 6.5 it’s trying to make up.

The result is an image that looks, er, odd, to say the least. (Unless, as I said, you look at it from across the room.)

In sum then, a one-word answer to your question as to why one looks good and the other doesn’t, then, is: “resolution.”

In fact, if you want to see it in action, take a 640 x 480 photo and put it up on your computer screen, and also on an iPod Touch. You’ll instantly see that the iPod image looks much better, because the screen resolution is so much higher. If you make them each the same physical size (use a ruler) the detail will be lost on your computer monitor because it can resolve only 100 ppi, while the iPod can resolve to 160 ppi.

(Which again proves that aspect of it which is, as you noted, in the venue of the mind: what you get used to seeing.) Another way to see the difference, side by side, is to compare a photographic print with a reproduction in a book. The book is likely to be about 150 dpi, while the photo may be as high as 1440 or 2880. You can see the ‘dots’ in the book, but you cannot see them in the photo (with an unaided eye.)

Perhaps that will help clear up the confusion I hear in your statement “…even if it has only been reduced to say 150 or 300 dpi or ppi…” I hope you now see that you’ve not ‘reduced’ the image at all. The image is still the same size it always was, and in fact, if you want it printed at the same size you see on the screen, you will have to use some fancy software to -enlarge- the image (not reduce it) so that it can print at 150 or 300 dpi. If you don’t do that, you’re just taking a tiny square pixel and blowing it up into a large square block.

Out of energy… off to spend some time with my wife.

Tracy

On Nov 29, 2009, at 6:26 PM, Bruce wrote:
snip

Your comment below reminds me to ask you, when you get to it (in a few days?), to please clarify an underlying aspect that I have never been clear about. That is the difference between print standards and screen viewing standards. (By the way, let’s continue to discuss this at the not-TOO-technical level we have been using so far here.)

I think what I am asking about is mostly a matter of human visual perception (or if you prefer, of brain integration of visual information). In short, why does an image look just fine to me on a monitor at a “display” resolution reduced down to 72 or 96 dpi or ppi or whatever? Yet even a total amateur can be dissatisfied by printing out the same image, even if it has only been reduced to say 150 or 300 dpi or ppi. Why should the paper printing process be so closely examined by our eyes and brains, while a monitor image can get away with such less information and still be pleasing?

Or am I perhaps even fooling myself with this observation?

But it seems to me that this needs to be addressed before much can be said about the needed ppi, dpi, lpi, or whatever on printing.

resolution, part 4

Part 4, and we’ll begin with Mark’s question. Mark profoundly confuses physical print size with image size. (Which is likely why he asked me to start this missive in the first place…)

As pointed out in Part 3 of this encyclopedic treatise, a graphic image has a fixed size, say 3000 x 4000. The size of the printed image has to do with how many of those 3000 “dots” are used in a given inch.

If you print that image at 3000 dpi, the printed image will be one inch wide. If you print it at 300 dpi, the print will be 10 inches wide. If you print it at 30 dpi, the print will be 100 inches wide.

The DPI is an OUTPUT specification, and has nothing to do with the image itself.

Let’s go into Photoshop, and make a new document. When you do that, you’ll get a dialog box asking you to specify the size. Make sure the pop-up menu says you’re specifying pixels (the list is pixels, inches, mm, cm, etc.).

Enter 320 for width and 240 for height, and as you do so, look in the lower right of the box. You’ll see the “Image size” change. That’s the size of the file. If you increase the size to (say) 600 wide, the file size will grow larger. Reduce either dimension, and the file size will grow smaller.

Now stop making changes, and look at the file size. Remember it, and go to the “resolution” field. Enter 9000. The file size does not change. Enter 72. The file size does not change.

That’s because the DPI is a specification for how the file will be printed: how many of those pixels of width will be used in a single inch when you print it out.

Now let’s do it the opposite way.

First, put 72 back into the resolution field.

Change the popup menu to inches instead of pixels. (Now you’re specifying the output size – the inches of printed width.)

Put 5 in as the width (inches) and 7 as the height. If you have Color: RGB and 8-bit in the other menus, then your image size will be 531.6K.

Briefly, switch the “inches” back to pixels. You’ll get 360 as the width because 5 x 72 = 360. Switch back to inches again.

Change the resolution to 300 ppi and watch the image size: it will balloon up to 9 Megabytes. Switch the width popup to pixels, and you’ll see that now the file is 1500 pixels wide, because 300 ppi x 5 inches = 1500.
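
You can verify Photoshop’s numbers yourself: uncompressed 8-bit RGB is 3 bytes per pixel, so the “image size” is just width x height x 3. In Python:

    # Uncompressed 8-bit RGB takes 3 bytes per pixel.
    def image_size_bytes(width_inches, height_inches, ppi):
        width_px = round(width_inches * ppi)
        height_px = round(height_inches * ppi)
        return width_px * height_px * 3

    print(image_size_bytes(5, 7, 72) / 1024)        # ~531.6 K
    print(image_size_bytes(5, 7, 300) / 1024 ** 2)  # ~9.0 MB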

So, at this point, I hope I’ve made clear how the dpi/ppi thing works. A given image file is some absolute, fixed size (whatever that may be, such as 3000 x 4000) and that can be expressed either simply (3000 x 4000) or as output-size/dpi (10″ x 13.3″ at 300 dpi or 5″ x 6.6″ at 600 dpi). Each of those describes the exact same file, just at different output resolutions.

If this isn’t completely clear, the rest of what follows will only be confusing, so I’d suggest you go back and re-read until it is, before continuing.

Your monitor, which is an output device, has a fixed display resolution. You can figure out what it is. First, go to the Displays preference panel and find the width in pixels of your monitor at its native setting (or look it up in the book that came with it. My 23″ Cinema Display is 1920 x 1200, for example.) Next, grab a ruler and simply measure the width of the display area (no plastic, no black, non-image parts… just the illuminated width). Finally divide the resolution from step one by the measured width. That’s your screen’s ppi. Mine is 98.14.

(Actually, if you do this, you’ll likely find that the horizontal and vertical resolution are not the same! Mine is 98.14 H and 96.0 V; that’s apparently a manufacturing necessity. But I’d not worry about it: we’re talking 2/100ths of an inch here, so we’re close enough for govt. work.) I used the 98.14 width.
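
The calculation in Python form (my measured width here is approximate, which is why the result comes out a hair off from the 98.14 I quoted):

    # Screen ppi = native horizontal pixels / measured width in inches
    native_width_px = 1920      # from the Displays preference panel
    measured_width_in = 19.56   # ruler measurement of the lit area

    ppi = native_width_px / measured_width_in
    print(f"{ppi:.2f} ppi")     # about 98.16 on my 23-inch Cinema Display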

In Photoshop, you can actually enter that amount in its preferences so that when you set an image to 100%, it will really be displaying one pixel per pixel… or darned close, eh?

To test this, set your ppi as determined above into the Photoshop/preferences/units&rulers/screen resolution.

Then make a new document, and use inches as the setting. Try 8 inches wide, with a resolution of your ppi – the same thing you just set in preferences. 98.14 in my case.

When the document appears in the window, make sure you’re viewing at 100% (see lower left corner, or just double-click on the zoom tool) and grab your ruler again. Measure the width of the document, and you see that it is exactly 8″ wide.

OK… now, let’s go back to that 12Mp photo you have; about 3000 x 4000. If you load that into PS (Photoshop) and view it at 100%, you will not be able to see everything in the image because your screen is not 3000 pixels wide and 4000 pixels tall (or larger) which you’d need to see an image that big at a 1:1 pixel ratio (100%).

To see that whole picture, you’ll probably have to view it at about 25% (750 x 1000) which will fit your (say) 1920 x 1200 screen.

What follows is a bit involved, but not really difficult, so bear with me. It’s only a long-winded explanation of the ramifications of what has gone before… with the practical result of ending up with the correct DPI to use when printing for a given target size.

First, however, there is this one important difference between printers and monitors: monitors have a fixed pixel resolution. (You can change the size, but to get a 1:1 ratio, there’s only one fixed resolution on a monitor, as determined by the number of pixels it actually has.)

A printer, on the other hand, can print at various resolutions. You can tell it to use 300 dots of ink for each inch, or you can tell it to use 10. 300 – 360 dpi is common for photo printers. (Epson printers have a resolution up to 2880 dpi, and that confuses people at first. I can go into it if you like, but basically it’s because they can print a variable size dot of ink.)

The point however is that printers can use a variable output resolution, while your file size and screen size are fixed.

Back to the monitor –

Here we go: at 100%, you’re seeing 1 pixel in the image for each one pixel on the screen. If you change the view size to 50%, you’re effectively (not -really-, but effectively) seeing 2 pixels from the image for each single pixel on the screen. (You can’t physically do that, of course, since one screen pixel is just ONE screen pixel, so you’re really viewing every other pixel from the image.)

And at 25%, you’d be looking at every 4th pixel from the image.

That is, at 25%, instead of 3000 pixels wide -on the screen- you’re seeing 750 pixels wide. And since your screen shows 98.14 pixels per inch, then on your monitor screen, that image will be (750 divided by 98.14 or) 7.642 inches wide.

And to print that image at the same exact size, 7.642 inches, you’d set the printer resolution to (3000 divided by 7.642) = 392 dpi. (Or you could work it from the other side and because you’ve reduced to 1/4th, you’d multiply by four, and you’d get the same thing : 98.14 x 4 = 392.)

Don’t hurt your brain: The 3000 divided by output-width-you-want is certainly easier to remember, and much less convoluted.

So, if you have a photo that is 3000 pixels wide, and you want it to print at 10 inches wide, you’d set the dpi to 300, since 3000 divided by 300 equals 10.

Again, you can easily see this in PS. Open up your image and choose Image/image size…

Notice in the resulting dialog box that PS keeps the actual image size completely separate, in its own box at the top (Pixel Dimensions), from the output, or “Document,” size in a separate box below. As we’ve seen, that’s because the document size is dependent on the Resolution.

Before we do anything else, however, look near the bottom of that box for the checkbox next to “Resample Image:” and make sure the box is NOT checked.

Now, to set your output size of your 3000 pixel wide document to 10 inches, we know that we have to set the Resolution to 300. Put 300 in the Resolution box and bingo, the output width changes to 10″.

Photoshop even makes this easier, and saves you the (admittedly simple) calculations. You can just put 10 inches into the Document Size: Width: and it will instantly change the dpi to 300 for you.

In fact, that’s the fastest way to approach it: but MAKE SURE that the Resample Image checkbox is NOT checked when you do this.

Then when you enter a desired size, you’ll see what the effective dpi is that the printer will use.

So, in answer to Mark and Bruce then, here’s how you can tell what will happen to an image when it’s printed out.

Try this: create an image in PS, and set the size to 320 x 240. Next, visit the Image Size box, and it will show you the actual size in pixels in the upper box: Pixel Dimensions: 320 x 240.

The lower box, Document Size, will show you what happens when you print it out. (But again: MAKE SURE that the Resample Image checkbox is NOT checked when you do this!)

Printers want at least 150 dpi; photo printers want about 300 dpi… so that’s what you’re looking for in the Resolution field. If your image then is 320 x 240, and you set the Document box width to 5 inches, you’ll see that the print resolution is only 64. You can pretty much bet that will look terrible.

If you set the resolution to what the printer wants (say 300) then you’ll see that the print will be just about 1 inch wide. At 150 dpi, the image will be just over 2″ wide.

NOW…

That all said, you -can- check the Resample Image checkbox. The first thing you’ll notice is that the Pixel Dimensions box on the top becomes active. That’s because you are about to try to change the -actual dimensions- of the file! You are going to try to make something out of nothing! Remember, the actual data, as captured, was only 320 x 240.

Put in 300 dpi and a size of 5 inches… and you’ll see that the pixel dimensions have changed to 1500 x 1125. If you now click the OK button and go back and look at your image, you’ll see it’s all blurry… and that’s because you told PS to make up information to fill in the blanks in the change from 320 pixels wide to 1500 pixels wide.

And you’re about to print out that nice, blurry image.
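
For what it’s worth, here’s the arithmetic of how much of that image is invented, in Python:

    # How much of an upsampled image is made up? Going from 320 pixels
    # wide to 5 inches at 300 dpi:
    original_px = 320
    target_px = 5 * 300                  # 1500

    print(f"Scale factor: {target_px / original_px:.2f}x")        # ~4.69x
    print(f"Invented pixels per row: {target_px - original_px}")  # 1180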

And I hope that all helps to explain what is going on with images, monitors and printers.

Tracy