Samsung “The Frame” Displays Photos with Style

David Salahi

Back in the day when I used to shoot 35mm slides I loved the bright, rich colors I could get when projecting photos. The downside, though, is that projecting slides turns an evening into a bit of a production. You have to set up the projector, the screen and a tray of slides, and then you turn off the lights and view the show. Friends and family didn’t always appreciate the experience—but I always wanted to display my photos to the best effect.

I often thought it would be nice to find some way of getting that richness and contrast in a print but I was always disappointed by the relatively flat-looking prints I would get from my slides. And the other problem is that you can only display a handful of prints in a typical home. I’d love to be able to change photos on any given day.

Much later, after digital photography had become well established I thought that there must be some way of displaying different photos on a bright screen. Digital projectors now exist and some are quite good—but the good ones are often quite expensive. And, projection still makes viewing slides into a production.

So, over the years I’ve looked around and I’ve found some digital frames but everything I could find was very small and low-res. These days you can find some frames that are a little larger and that will support HD resolution (1920 x 1080). And there are some large (55-inch) screens but they tend to be niche products which require a PC to provide the video source. And these large screens still don’t generally support anything higher than HD resolution. In spite of this progress I hadn’t been able to find a compelling solution. Each product had some disadvantage that made its offering less than persuasive.

Samsung’s “The Frame” TV/Photo Display

Then I stumbled across Samsung’s The Frame TV. This is a 4K TV which also has a mode that is optimized for displaying photos. The Frame has an elegant appearance and the main menu system also looks refined. There’s a second menu system which is more basic; I’ll have more to say about that below.

The most important factor (other than cost) is that photos displayed on The Frame look really good. I could even say they look more “artsy” on The Frame than on my Eizo monitor. Now, my PC monitor is clearly more accurate but the photos on The Frame have something special that gives them a very attractive appearance. One thing I can identify is that there’s some sort of digital texture built in—and I find that I generally like it. Also, the screen has a matte finish which contributes to the illusion of being a print rather than a TV.

Size Matters

There is also something about seeing a photo displayed in a large format that alters your perception of the image. And I would include my The Frame’s 43” diagonal screen as “large.”

I have a 32” 4K monitor connected to my PC and it was a huge step up from the 24” 1920 x 1200 pixel monitor it replaced. But my 43” The Frame is another substantial step up. Seeing my photos on The Frame kicked the Wow! factor up another sizable notch.

When comparing print/display sizes it’s important to remember that it’s the area that must be compared, not the diagonal screen measurements. E.g., my 24” monitor measures about 20” x 12.5” = 250 sq. in. My 32” monitor measures about 28” x 16” = 448 sq. in. The ratio of the diagonal measurements is 32 / 24 = 1.3 but the ratio of the areas is 448 / 250 = 1.8. The latter number is more useful for determining the effect on a viewer.

Similarly, my 43” Samsung The Frame measures about 38” x 22”. The ratio of the diagonals between that and my 32” monitor is 43 / 32 = 1.3 while the ratio of the areas is 836 / 448 = 1.9, nearly double.
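The geometry above can be checked with a short Python sketch. The aspect ratios here (16:10 for the 24-inch monitor, 16:9 for the other two) are inferred from the stated resolutions; because this computes exact dimensions rather than rounded tape-measure figures, the ratios come out a shade lower (about 1.7 and 1.8) than the rounded numbers above.

```python
import math

def screen_area(diagonal, aspect_w, aspect_h):
    """Area (sq. in.) of a screen, given its diagonal and aspect ratio."""
    unit = diagonal / math.hypot(aspect_w, aspect_h)
    return (aspect_w * unit) * (aspect_h * unit)

mon24 = screen_area(24, 16, 10)   # 24" 1920 x 1200 monitor (16:10)
mon32 = screen_area(32, 16, 9)    # 32" 4K monitor (16:9)
frame43 = screen_area(43, 16, 9)  # 43" The Frame (16:9)

print(round(mon32 / mon24, 1))    # area ratio, 24" -> 32": 1.7
print(round(frame43 / mon32, 1))  # area ratio, 32" -> 43": 1.8
```

Either way the point stands: upgrading by a modest-sounding diagonal step nearly doubles the picture area.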

Also important is the UHD resolution (3840 x 2160) that The Frame offers. Seeing all that detail while standing at a comfortable viewing distance is a big improvement over what most digital frames offer.

You may find, however, that the UHD display’s aspect ratio (16:9) is not optimal for displaying photos. That 16:9 ratio works out to about 1.8—a relatively wide, short rectangle. By comparison, 35 mm slides were actually 36 mm x 24 mm, so their aspect ratio works out to 1.5—a significantly taller shape. I’ve been cropping my photos to match The Frame’s aspect ratio, but this can be limiting, and the constraint can ruin a composition. I suppose I could keep the original crop and let The Frame letterbox it; I haven’t tried that yet.
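To see how much the crop costs, here is a quick sketch assuming a hypothetical 6000 x 4000 pixel (3:2) frame; cropping to 16:9 keeps the full width and trims the height.

```python
# Trim a 3:2 frame to fit The Frame's 16:9 screen.
# The 6000 x 4000 pixel frame is a hypothetical example.
w, h = 6000, 4000
cropped_h = round(w * 9 / 16)   # keep the full width, trim the height
trimmed = 1 - cropped_h / h

print(cropped_h)                            # 3375 pixels of height remain
print(f"{trimmed:.0%} of the height is trimmed")  # 16%
```

Losing roughly a sixth of the frame's height is exactly the kind of cut that can clip the top of a composition.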

One of the things that separates The Frame from regular TVs is that it has been designed to look like a framed print. That means slimming down the display and removing any controls, LEDs, etc. from the front panel. There is a separate box to contain the TV electronics and just a single slim cable attached to the display. To make the print illusion complete you can run the cable behind your home’s drywall so that nothing is visible to give away the secret. The electronics box (OneConnect) can be located out of sight but you need to be sure the box is accessible after installation so that you can insert a USB drive whenever you want to change photos.

Limitations—Brightness, Photo Orientation

The Frame’s maximum brightness could be a limitation. On an overcast day or early/late in the day The Frame’s brightness is fine but if located in a bright, sunny room the picture might not be bright enough to look attractive.

Another problem this frame has, like any digital frame, is that it must be set up in either landscape or portrait orientation. Once you’ve done that you’ve locked yourself into displaying only about half of your photos. You might, of course, be able to crop a portrait orientation shot for display in landscape format, particularly if you have a high-res camera. But that’s not the point of a properly composed portrait orientation photo.


The Frame is not cheap. I paid about $1000 for my 43” model. But making prints and getting them matted and framed is not cheap either. In addition to print, matte and frame you also have to add a picture frame light which you could end up paying $100 or $200 for.

If you want to make your own prints you end up spending hundreds or thousands of dollars on a high-quality printer and fancy inks and papers. You do get to retain full creative control of the resulting print—but then you also have to develop your craft sufficiently so that you can achieve the result that you envision.

If you send your photos out to be printed you simplify the process, but you lose a lot of creative control. And you may wind up paying for several test prints before you get what you want.

So, from my point of view the expense of The Frame is reasonable given the alternatives.


Each photo automatically gets a matte. If you don’t want mattes, you have to edit the settings for each photo and remove the matte. If you do want a matte you generally are provided with several options for the matte size. But some photos mysteriously have only one choice for the matte type: Shadowbox matte. When that happens that really is your only choice—you cannot even choose to have no matte on that photo. Is The Frame making an artistic choice??

Another choice you don’t have is the order in which photos are displayed. It does what it does and that’s what you get.

The provided remote control is overly sensitive when pressed. Very often when I try to press a single time it registers two presses, thus propelling me into the wrong mode for what I’m trying to do. This is annoying in any case but is especially so while one is still learning how to use The Frame.

You can preview images on a USB drive, but you cannot set them to display as part of your My Collection photos from this preview mode. The photos have to be imported as part of a separate process in a different menu.

There’s a USB drive preview option in both the dedicated remote control and in the SmartThings phone app but they work entirely differently. Curiously, photos previewed from a USB drive display brightly—which may be different from your ArtMode brightness setting.

Limitations—Storage Space for Photos

This is a serious limitation. The Frame claims to provide 480 MB free space for your photos which is not a lot—but it seems you don’t actually get even that much. According to my The Frame I had 332 MB free when I tried to add four JPEGs (total size less than 16 MB) but it said I didn’t have enough free space. So, I removed four photos and it then said I had 360 MB free. I tried copying my four new photos again, but it still said I didn’t have enough room.


Eventually, I ended up deleting more photos from my The Frame and was able to copy the new files I wanted to add. But the limitation to what appears to be only about 100 MB of photos is either a serious design error or a serious bug.

There’s no excuse for the paltry amount of memory provided. Right now, I can buy a 2 GB USB drive for less than $3. An additional $3 in the cost of The Frame would not have deterred me from the purchase.

Automatic Power Off/On Failures

I had some problems with the TV turning itself on during the night (while in ArtMode). Several times so far, I have turned it off at night, but it was on the next morning when I got up. There are no children, animals or other sources of motion/noise in my house. Regardless of any motion, when I turn a device off I expect it to stay off until I turn it back on. Somehow, that basic product requirement does not seem to be implemented correctly.

Similarly, if a product has a motion detector and an auto-off feature I expect it to work consistently and reliably. However, with motion sensitivity set on low and with no one and nothing moving in the room the frame sometimes fails to turn itself off or there is a long delay—much longer than the 15 minutes I specified. (Low sensitivity works better for me than the other options but I still experience the problem described here.)

There is no timer mode. You have to manually turn The Frame on every morning/afternoon and off every night (well, except for when it erroneously turns itself on in the middle of the night).

And when you turn it on it starts by playing the first photo in the list. If you have the slideshow set to display photos for more than about an hour each you’re not going to see very many of them. E.g., if you are changing slides every 6 hours you can only see 4 slides in 24 hours of operation.

There are some settings in a menu called Eco Solution but these settings were disabled in my TV. There’s another setting titled Auto Protection Time but both this and the Eco Solution settings appear to work only when in TV viewing mode. No information is provided about power consumption in any mode.

Useless Trinkets

When previewing photos on a USB drive there’s an option called 360° View which allows you to deform a photo to show a different apparent point of view. It also can show a photo as though wrapped around a cylinder (and/or cone?). It struck me that this could be useful for viewing panoramas but it appears that this feature is only available in preview mode, not normal ArtMode view mode. Given that, one wonders why the feature is included. I couldn’t find any mention of it in the manuals.

In preview mode The Frame automatically adds a slight push in (zoom in) to each photo as it appears. That looks kind of nice. But that, like the 360° mode, is only available in preview mode, not in the real slideshow. So, what’s the point?

Quirky, Non-Intuitive User Interface

The user interface is quirky and not at all intuitive in many places. Menus don’t follow normal conventions; e.g., pressing the joydisk center button on the remote can mean Select or OK—as you would expect—but it can also do what you would expect a Back button to do.

Some functions can only be done on the dedicated remote while others can only be done with the phone app. In some cases, it might be possible to do a thing on either but the dedicated remote is usually more convenient. I’m not sure what good the app is other than to provide Samsung a way to gather more information about me.

One thing Samsung did right—they dedicated the Color button to function as a brightness control when in ArtMode. This makes it easy to quickly adjust the brightness without stopping the slideshow and navigating through the menus.

You have to have the frame on the same Wi-Fi network as your phone. I would have preferred to keep the frame on my guest network as that provides improved security.


Samsung’s The Frame occupies a unique niche in the high-def TV market. The presentation is elegant in terms of both the frame itself as well as its rendering of photos. Unfortunately, the user interface is clumsy and a variety of errors or design flaws make an awkward user interface even more difficult to use. But if you’re looking for a good way to display digital photos in a sophisticated style The Frame may be worth enduring the glitches.

DxO PhotoLab 2 vs. Adobe Bridge for Editing Raw Files

David Salahi Nik Software, Photoshop, Software

I've been a dyed-in-the-wool Photoshop user for years now and have never seriously considered any alternative workflow except Lightroom. I've tried Lightroom a number of times over the years but have always found it lacking (for my purposes). But now a new (to me) app, DxO PhotoLab 2, has me reconsidering my loyalty to Adobe Bridge & Camera Raw. Because I recently received a free copy of PL2 (along with my upgrade to the latest version of the Nik Collection) and because Bridge hasn’t received any significant updates in some time (none that are useful to me, at least) I decided to give PL2 a try.

Adobe Bridge Issues

At the same time, I’ve had some irksome problems with Bridge, including:

  • It can be slow. Not nearly as slow as Lightroom, mind you, but slower than PhotoLab 2, for sure.
  • Bridge often displays blurry image previews, both in the Preview Window and when previewing full screen. If you click to blow the image up to 100% it then becomes sharp. But I don't always want the image at full screen size. And that’s an extra step that slows my workflow. There may be a workaround for this; details below.
  • There is no histogram within Bridge—you have to open Adobe Camera Raw (ACR). This slows me down when culling images and checking to see if a portion of the image is blown out or blocked up. And opening Camera Raw implies opening Photoshop which is not a speedy operation even on my speedy PC. Having a histogram directly in Bridge would also be helpful in selecting the best image from an exposure-bracketed series.

There are other major and minor annoyances. A major annoyance is Bridge’s limit of only two zoom levels. A more minor problem is the way that Bridge sometimes defaults to the sRGB color space or to a lower resolution version of your images. This is settable in ACR but you have to keep your eye on it because it occasionally seems to change of its own accord. I’ve also had problems with losing collections and keywords during software upgrades.

Comparison Overview

DxO’s PhotoLab 2 avoids some of these problems while also providing some useful features that are lacking in Bridge/ACR. The most powerful of these additional features are “Local Adjustments.” Local Adjustments is PL2’s name for the Control Points (U Point Technology) which allow you to very quickly create selections/masks which you can then use for selectively applying adjustments.

At the same time, Bridge/ACR, together with Photoshop, provide powerful features that cannot be matched by PL2. And, if I’m going to switch there will also be a cost in terms of adapting my workflow. So, I decided to do an in-depth comparison of the two to help me decide which options would provide the best workflow for my needs.

One of the problems with Bridge/ACR is that these are two different pieces of software. Some functions are done in one app and some in the other. For example, as I previously mentioned, with Bridge you have to launch ACR (which also opens Photoshop) before you can see a histogram. But with PL2 you can see a histogram by quickly switching from PhotoLibrary mode to Customize mode by just clicking a tab:


In editing (Customize) mode the histogram is immediately visible.

Differences and Similarities

Control Points & Local Adjustments

I’d like to continue with an overview of some of the key differences between Bridge and PL2. Most notable is the inclusion of Control Points in PhotoLab 2. If you’ve never seen control points before do yourself a favor and learn about them because it could substantially change your workflow. You should run, not walk, to DxO’s U Point Technology page or to this short YouTube video where you can see them in action.

In 25 words or less: Control Points are the quickest way of creating accurate selections/masks that I’ve seen in any photo editing app. This powerful feature alone is reason enough to consider switching from Bridge to PhotoLab 2.

But let me back up a minute and briefly describe the PL2 user interface so you can better understand how Control Points apply. When you first open PL2 you see a screen that looks a lot like Bridge or Lightroom:


This screen allows you to browse your folders and images in the same way you would do with Bridge. From there, you can switch to the Customize tab to either edit your images directly or open them in another app (such as Nik’s Color Efex Pro).


With PhotoLab 2, just as in Adobe Camera Raw, you can apply non-destructive exposure adjustments globally to entire photos. These adjustments are stored in a sidecar file in the same folder with your photos. In the case of ACR these sidecar files have an .XMP file extension; PL2 uses a .DOP extension. Either way, once you make exposure & color adjustments your edits are saved and the next time you view/edit the photo the same adjustments will be applied. But wait—there’s more!
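The two apps name their sidecar files slightly differently. Here's a minimal Python sketch of the naming, assuming the conventions I've observed in my own folders (ACR replaces the raw extension with .xmp, while PL2 appends .dop to the complete filename); the folder and filename are hypothetical.

```python
from pathlib import Path

def sidecar_paths(raw_file):
    """Sidecar paths for a raw file, per each app's naming convention.

    Assumed conventions: ACR swaps the raw extension for .xmp;
    PhotoLab 2 appends .dop to the full filename.
    """
    raw = Path(raw_file)
    acr_xmp = raw.with_suffix(".xmp")            # IMG_0001.xmp
    pl2_dop = raw.with_name(raw.name + ".dop")   # IMG_0001.NEF.dop
    return acr_xmp, pl2_dop

xmp, dop = sidecar_paths("photos/IMG_0001.NEF")
print(xmp.name)  # IMG_0001.xmp
print(dop.name)  # IMG_0001.NEF.dop
```

Because PL2 keeps the raw extension in the sidecar name, the two conventions can coexist in the same folder without colliding.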

In ACR if you want to apply localized adjustments such as darkening the sky without affecting the foreground you have to use the Brush, the Graduated Filter or a Radial Filter, etc. These are quite useful but they’re not as fast and accurate for most purposes as Control Points.

In PL2 you can start by doing global (entire image) adjustments and then you can fine-tune your image using one or more Control Points. This provides the ability to very quickly adjust photos without any brushing, building of lassos, or otherwise manually creating selections. Control Points are also, in my experience, quicker and more accurate than Photoshop’s Quick Select tool for most purposes.

I was disappointed, however, by the inability to temporarily turn off individual Control Points. When editing it’s common to switch back and forth between the original and edited versions of a photo. This allows you to better see how well your edits are working. PL2 does have this ability for the whole photo, as does ACR, but PL2 doesn’t provide an option to turn off individual Control Points to see exactly what changes a given CP is making. The Nik Collection apps like Color Efex Pro and Silver Efex Pro do provide this ability and I find it quite helpful for seeing exactly what effect each CP is having.

Of course, there are lots of situations where a quick automatic tool just won’t be adequate. In those cases, you can still fall back on Photoshop with all its powerful selection and masking features.

Tagging & Rating

One of the key features of a digital asset management (DAM) program is the ability to quickly and easily rate photos and tag them for further editing. Adobe Bridge provides three mechanisms for this purpose: labels, star ratings and keywords. PL2 also has labels, called tags or selection markers, but it offers only two options vs. Bridge’s five. Both apps offer ratings from zero to five stars.

Of course, both apps have the ability to filter images based on criteria that you specify although I have to say I found the feature hard to find in PL2. Bridge provides both a menu option and a filter icon in the toolbar. PL2 provides only the filter icon and it’s not in the top toolbar like Bridge. Instead it’s in another toolbar which is located closer to the middle of the screen. By carefully scanning the entire PL2 UI I eventually found it there. While I was still searching for it I checked the user guide but didn’t find any information about how to filter in the “Sorting & selecting the best images” section. In fact, I found the user guide to be very skimpy overall. Too often, the explanations were terse and minimal. Furthermore, the online user guide is essentially unformatted so there’s no control over the page layout like you see in the PDF version of the Bridge manual.


Getting back to filtering… PL2 works in the opposite way of Bridge. In Bridge you show a selection of images based on the attributes of the images you want to see. With PL2 you hide images based on the attributes you don’t want to see. This latter approach feels awkward to me. Maybe it’s just because it’s different from the way I’m used to working but it does seem to require extra mouse clicks. In Bridge I can quickly choose “Show Labeled Items Only.” Later, if I want to see everything again I can clear the filter. But with PL2 I have to turn off both untagged and rejected images in order to see only the selected photos. Then, to see all images again I have to turn back on both untagged and rejected images. Same issue for star ratings.


Both apps also have the ability to sort a folder’s contents based on various image attributes. Here are the fields available in PL2:


Here’s the list of options for Bridge; these seem more useful to me:


Don’t ask me what Processing Status is. I haven’t been able to figure it out yet.

Other Tagging/Rating/Sorting Issues

One other Bridge feature that I regularly use is the Stacking feature. I often take three or more bracketed exposures which can be combined easily into a single stack in Bridge. The stack can be expanded or collapsed depending on your needs at the moment. PL2 doesn’t have stacks.

Another feature that is absent from PL2 is Keywords. Bridge has the ability to create and assign keywords and sub-keywords to images. And, of course, Bridge is able to search for specified keywords. In fact, this search feature is quite robust with the ability to search based on much more than just keywords:


And there is a full set of Boolean operators:


After you get your search results you can then apply a filter (e.g., 3 or more stars) to further refine your results. Finally, you can save your results as a Smart Collection.

Zooming and Panning

I need to back up a step now and look at some basic functionality which should be a no-brainer requirement for a digital asset management (DAM) offering: sharp zooming and panning. You might be surprised to hear that lots of people have problems with getting blurry images in Adobe Bridge in preview mode.

To review: if you click on a thumbnail and then press space the selected image is blown up to fill the screen (or to 100% of actual image size, whichever comes first). Unfortunately, in my experience that preview image is often blurry. Before you tell me to get new eyeglasses take a look at these results from a Google Search. You’ll notice that, not only do lots of other people have this problem but this is a longstanding issue. At the time of my search (July 15, 2019) I see results going all the way back to 2010.

I’ve found a partial workaround by clicking on the preview image which is then blown up to 100%—and then the image is sharp. But for my typical 23 MB photo file that means I’m seeing about 20% of the image on my HD screen. I usually don’t want to start out looking at a photo zoomed all the way in. OK, if I click again it zooms back out to fit the image on my monitor and then it’s sharp. But that’s two clicks more than should be necessary as well as a waste of 2 seconds for something that should be immediate. And in PhotoLab 2 it is! PL2 has none of these problems with blurriness in preview mode.

But wait—there’s more (or maybe less, depending on your point of view). In Bridge there are only three image viewing sizes: thumbnail (which can be as large or small as your screen allows), fit-on-screen preview and 100%-zoomed in preview. When I think about this it seems ridiculous that that’s all you get from Bridge. In 2019.

PhotoLab 2 to the rescue again! PL2 obligingly allows you to zoom to any arbitrary zoom level. Click once to select an image. Roll the mouse wheel to zoom in or out in roughly 2% increments. Click and drag to pan a zoomed-in image. It seems so obvious that this is the way a DAM should work that I feel embarrassed for Adobe that Bridge doesn’t attain this base level of functionality. How would Bridge answer this accusation? Open the image in Camera Raw. Then you can zoom in and out to your heart’s content. If you don’t like that there are workarounds circulating on the net that tell you to clear Bridge’s preview cache. I’ve tried this a few times and have had some provisional success. But even if it works I then need to rebuild the cache in order to have good viewing performance. And how long will it be before I need to clear the cache again? Sigh.

These two issues have been a thorn in my side for years and a substantial part of my motivation to evaluate PhotoLab 2 as an alternative to Bridge.

Workflow Issues

Contrary to my statement below, DxO software user John M pointed out that it is possible to have a non-destructive workflow using DxO's PhotoLab 2 and a Nik filter like Color Efex Pro. In my opinion this workflow, though possible, is less convenient than a non-destructive workflow using Adobe Bridge/Camera Raw and Photoshop.

The first problem with the DxO workflow is that it requires a manual step of exporting the CEP settings. This could get tedious when making multiple changes to a filter's settings. Another problem is that PL2 settings could change between the first time an image is edited and a subsequent edit. This could cause the photo to change in an unexpected way. And a third problem I see is that if you want multiple Nik (or other software) filters the workflow becomes even more manual and is too unwieldy. (E.g., if you want both CEP and Dfine.)

John M and I had a lengthy discussion on the DxO forum. You can follow our discussion there:

Next, I’d like to discuss the overall process of taking an image from first look all the way through basic editing. For Adobe Bridge/Camera Raw/Photoshop here’s the procedure:

  1. Look at thumbnails in Bridge; use the two zoom modes described above to zoom in for a closer look.
  2. If the photo looks promising open in Adobe Camera Raw. Apply basic corrections in ACR. How far you take this is optional. Usually, I’ll just do some really basic adjustments: exposure, color correction, lens distortion removal and a touch of clarity.
  3. If I need more than steps 1 & 2 I’ll usually take the image into Photoshop next.

This is just my typical process. Other people do a lot more work in ACR. That’s fine. I just usually feel boxed in by the limitations of ACR so I figure I might as well go all the way into Photoshop where the sky is the limit. Unless it’s blown out 😁.

Step 1 doesn’t make any adjustments and any step 2 adjustments are non-destructive so that’s great. Within Photoshop you do, of course, have the option of either a destructive or a non-destructive workflow. I tend to lean toward non-destructive for serious work; for quick snapshots I don’t mind a bit of destructive editing. (And I can always create a backup copy of a layer.) This fully non-destructive option is where Bridge/ACR/PS has an advantage in my opinion.

Here’s the default workflow for PhotoLab 2 with the Nik Collection:

  1. Same as 1 above: preview and select (except that here you have a full range of zoom in/out options).
  2. Analogous to 2 above: Make non-destructive edits using PL2. Now, at this step PL2 has a serious advantage with its Control Point feature. ACR’s local editing tools are anemic by comparison.
  3. But I have a problem when I get to this stage. Suppose you now want to apply filters in Nik Color Efex Pro. PL2 has a nice button that will take you there. But there’s a detour! First, PL2 exports its edited raw file to a TIFF file—which bakes in the PL2 edits so that they are now destructive (i.e., no longer editable). Similarly, any changes applied in any of the Nik Collection tools get baked in immediately.

So, with the DxO approach only the PL2 edits are non-destructive and then only if you don’t apply any further effects. But with ACR/PS you have the option of a completely non-destructive workflow all the way through. Here’s that procedure:

  1. Same
  2. After making changes in ACR open as a SmartObject1. Now, from within Photoshop you can close the file and then reopen it; then, reopen the SmartObject and make any further edits in ACR non-destructively.
  3. Next, the Pièce De Résistance: open the file again and then edit in Color Efex Pro (or other Nik filter). Add one or more CEP filter layers, as desired. Save & Close.
  4. Reopen: your original ACR adjustments are still preserved as editable. Tweak these as desired. Open again in Color Efex Pro and edit as you wish. Tweak your existing settings or replace filters with new ones. All done non-destructively. And there are other advantages: you can duplicate the original layer before editing in Nik; you can add Photoshop notes to remind yourself or others about key points of the processing; and more.

DxO can’t compete on this playing field. What about PL2’s powerful Control Points which allow for quick localized non-destructive editing? ACR & Photoshop don’t have those. But guess what? Viveza has control points, too, and a lot of the same adjustments as PL2. Plus, Viveza can be applied as a SmartFilter just like Color Efex Pro. Viveza doesn’t do everything that PL2 can but it accomplishes a lot of the most common tasks.


These issues I’ve discussed above are the most important ones for the way I edit photos. So, for my purposes, I’m leaning heavily toward the full non-destructive workflow described in the four-step procedure above. I may do a bit more testing.

And there are a lot of other features that may distinguish one product or the other. I don’t have the time to cover them here but here’s a short list in case any of these matter to you:

  • PL2 has presets that might save you time or provide a different look that you might not have considered. And you can create your own presets.
  • PL2 has Projects which sound similar to Bridge’s Collections. But Bridge also has Smart Collections.
  • If you’re thinking about using both Bridge and PL2 be aware that PL2’s DOP sidecar files show up in Bridge thumbnails as an unrecognized file type. If you do a lot of editing in PL2 and then look at your folder in Bridge it will be littered with DOP icons. You can filter those out in Bridge but then you have to do that every time you go to a different folder.
  • Bridge has a bunch of built-in workspaces with the option for you to customize these and save them as new workspaces.
  • Image files in PL2 can have a variety of Processing Statuses. I don’t know what the implications of these are. Does time have to be spent processing files? Processing is discussed here but the meaning is not elucidated.
  • PL2 has a search feature. I don’t know how it works. Again, the user manual is not very helpful.
  • PL2 does not show the last modified date for files. Seems like an odd omission. Maybe I’m just missing it.

1 If you open a raw file from Bridge and then open it as a SmartObject it does indeed come into Photoshop as an SO with the ability to make further adjustments to the ACR settings. But the icon in the Layers palette doesn't say Camera Raw like it does if you open it from Bridge with no initial adjustments and then create an SO followed by ACR adjustments. It just shows as an undifferentiated SO. Just a small difference—you have to remember that there might be an ACR adjustment lurking in there.

Building a Video Editing PC

David Salahi Premiere Pro, Video, Video Editing 2 Comments

I was editing two-camera footage of a two-hour play and the work was grinding slowly forward. The problem was not the editing itself but the wait for Premiere Pro to update the view as I scrubbed back and forth in the timeline. I had already abandoned any idea of just playing the footage and marking specific points to cut. That was impossibly slow and the audio and video were never in sync. But even the process of scrubbing between two points along the timeline was excruciatingly slow.

I had already done all the standard things to optimize performance including:

  • Using hardware rendering
  • Using an SSD for my Windows disk
  • Using a separate SSD for my footage
  • Using yet another SSD for Premiere Pro’s scratch drive

I had a decent video card (Nvidia Quadro K2200) and processor (i7-4790 4 GHz) and lots of memory (32 GB) but my four-year-old system just wasn’t up to the task of multicam editing of 4K footage. (It worked pretty well with single-cam 4K footage.)


Researching Video Cards

So, I decided to build a new PC but I needed to do some research first. I started by trying to determine which graphics card I should buy. I began with Tom’s Hardware as I always do when I want to get updated on the latest PC technology & products. Then, I checked out the Adobe Premiere Pro CC system requirements page but the only information was a long list of approved cards. Prices for the cards range from hundreds of dollars to thousands of dollars. How could I know which card(s) would be at the sweet spot for my needs & budget?

Next, I browsed the Adobe forums and that led me to Puget Systems, a custom PC builder specializing in graphics & video editing hardware. I considered buying a fully configured & assembled PC from them but I was afraid I’d end up busting my budget. Their Hardware Recommendations page did have some valuable information about choosing a graphics card. In particular, this statement caught my attention:

“Premiere Pro works great with a Quadro card, but for most users a GeForce card is the better option. Not only are GeForce cards much more affordable, they are able to match or beat the performance of the Quadro cards.”

I continued searching for info and found an excellent article, How to Build the Best PC for Video Editing, on the Logical Increments website. This article provides a list of sample PC builds ranging from a budget PC ($700) on up to a “God-Tier Video Editing PC ($5,000+).”

Building a PC Parts List

I decided to dig into the parts list for the Logical Increments “Video Editing Supercomputer ($2,600).” However, I found that some of the specified parts were not available so I started to research alternatives. Another thing I learned is that cryptocurrency mining is affecting the price and availability of certain PC components. Four years ago I paid only $340 for my Intel Core i7 processor but now I was looking at $900 for the Logical Increments-recommended i7-7820X! (Note: Logical Increments has updated their system configurations since I initially consulted their site. The parts list now specifies an i9-9900K; $550 as of 12/3/18. Sadly, I visited their site about a week too soon and ended up paying hundreds of dollars more for my processor. I might have returned the CPU to Amazon but newegg’s policy prevented returning the motherboard. On the flip side, I hear prices for PCs & components will be increasing soon if the China tariffs go into effect.)

Graphics Card

Getting back to the graphics card question I noted this statement on Logical Increments’ site:

“Unlike other video editing software options which rely primarily on the CPU, DaVinci Resolve is driven almost entirely by the GPU.”

However, the other software options, mentioned in the preceding paragraphs, include my choice of NLE: Adobe Premiere Pro. Elsewhere on the page it says about Premiere: “in real-world situations, the performance difference between a moderate GPU and a powerful GPU isn't very significant.” So, since I was paying through the nose for the CPU I decided to take their advice about getting a good “moderate GPU.” Based on pricing and availability of similar models I chose the EVGA GeForce GTX 1070 FTW GAMING ACX 3.0. It had excellent reviews, 5 eggs (on newegg), based on 351 reviews.

Motherboard & Memory

The motherboard recommended by Logical Increments at the time has also changed since I placed my order but based on various other recommendations I chose the ASUS ROG STRIX X299-E GAMING LGA2066 DDR4 M.2 USB 3.1. This particular mobo had only a 4-egg rating but Asus has a good reputation in general and I had previously used an Asus mobo that I was happy with. This one has two M.2 slots and eight DRAM slots.

I’ve never been an overclocker. For most of my work, and because I’m not a gamer, I don’t feel it would be worth the trouble. So, I opted for two pairs of CORSAIR Vengeance LPX sticks for a total of 32GB (4 x 8GB) of 288-Pin DDR4 2666 SDRAM (which are on the Asus QVL). With memory slots to spare I chose four 8GB sticks instead of a larger memory kit, thinking that this way I would get the benefits of quad-channel memory access. (Hopefully, I’m right about that but I don’t think it will cost me anything if I’m not.)
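As a sanity check on that quad-channel reasoning, here’s a quick back-of-the-envelope calculation of theoretical peak bandwidth for DDR4-2666. This is my own sketch, not a figure from Corsair or Asus; it assumes the standard 64-bit (8-byte) DDR4 memory channel:

```python
# Theoretical peak bandwidth for DDR4-2666.
# Assumption: a standard 64-bit DDR4 channel, i.e. 8 bytes per transfer.
transfers_per_sec = 2666 * 10**6   # "2666" means mega-transfers per second
bytes_per_transfer = 8             # 64-bit memory channel

per_channel_gb_s = transfers_per_sec * bytes_per_transfer / 10**9
quad_channel_gb_s = per_channel_gb_s * 4  # one stick in each of four channels

print(round(per_channel_gb_s, 1))   # ~21.3 GB/s per channel
print(round(quad_channel_gb_s, 1))  # ~85.3 GB/s with all four channels populated
```

Real-world throughput will be lower, but the 4x multiplier is the reason to spread four sticks across four channels rather than pair two larger sticks in two.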

SSDs & Hard Disks

For a while now I’ve been watching the development of the M.2 interface and I’ve come to the conclusion that this is a good time for me to adopt the technology. Slots are now common on motherboards and M.2 SSDs are reasonably priced. The SAMSUNG 970 EVO M.2 2280 was specified in the Logical Increments system; at $78 for the 250 GB version, that’s not much more than a SATA SSD. The 250 GB SSD will be my Windows boot drive and I also have a 500 GB version as my video footage drive.

be quiet! Dark Base 900 Case and Straight Power 11 Power Supply

The be quiet! Dark Base 900 case comes with header cables already routed nicely and plugged into the front panel. Making these connections with their tiny connectors in the corner of the case can often be a study in frustration. This way, some of that frustration is avoided.

On the other hand, the 900 is not designed for the power supply unit to mount with its 110 VAC connector exposed through a hole in the case (the usual way). Instead, the mounting bracket holds the PSU a couple of inches inside the case, and the case has a 110 VAC pigtail to connect from the power cord socket to the PSU. This also means that the PSU’s on-off rocker switch cannot be flipped from outside the case. But be quiet! has an answer for this, too. There’s a separate rocker switch already mounted to the case, accessible from its exterior, and the two poles of that switch are connected to the supplied internal power cord. This assembly is wired in such a way that the switch controls power to the cable.

This unconventional mounting approach also means that the back of the PSU doesn’t draw in cool air directly from outside the case. The PSU’s metal grille is entirely inside the case so it’s drawing in air that’s already (just) inside the case.

Wrong Item Shipped? Or Attempt to Pass Off EVGA PSU as Straight Power?

I ordered a be quiet! Straight Power 11 850W power supply unit but I was confused when I opened the box. The outside of the box said Straight Power. But the PSU itself said EVGA Supernova 850G2. There was no mention of be quiet! or Straight Power on the unit. I contacted be quiet! and quickly concluded that the unit I had received was not the correct PSU. I contacted Amazon and returned the unit and then ordered another Straight Power 11 from newegg. When the correct unit came I noticed how much smaller and lighter the genuine Straight Power was than the EVGA PSU.

Figuring out how to mount the Straight Power 11 into the case was a puzzle that took me about an hour to solve. My first solution ended up with the PSU’s fan blowing air into the central part of the case. I didn’t notice that until I had finished tightening everything down. Realizing my mistake I unscrewed the six affected screws, reversed the PSU orientation and screwed it all back together. Now, the fan immediately exhausts the PSU’s heat directly out the bottom of the enclosure.

M.2 Installation (& Confusion)


This motherboard provides sockets for two NVMe M.2 SSDs, although at one point I was confused about the number of sockets. The mobo manual points out a heat sink arrangement for one SSD. To mount the drive you remove two screws and lift off a relatively large piece of metal. You lay the SSD onto the mount with the gold connection points in the socket. Then, you replace the top piece of metal and tighten the screws.


Initially, I had assumed there would be two M.2 sockets under that largeish piece of metal. But after installing the first I didn’t see a second one. Upon reading the manual more carefully I discovered a second mount of an entirely different type elsewhere on the board. With the second one you plug the M.2 card vertically into the socket. And there is no heat sink. The motherboard kit does include a piece of metal that functions as a brace to offer a little protection for the card and also keep the M.2 card firmly inserted into the socket.

Installation in the first socket was easy but the second one was tricky. It’s one of those tasks where getting everything to fit properly all at once requires a good bit of dexterity. Eventually, I got it to all fit together but then was worried by the complete lack of a heatsink for the second card. If the first drive gets such a hefty heatsink doesn’t the second one need at least some kind of heatsink? After all, this 500 GB drive was going to be my main footage drive while editing and rendering. Tons of data would be regularly flowing on and off the drive.

With some trepidation I started formatting the drive while periodically touching it to see how hot it was getting. Halfway through I balked and canceled the format. After searching online for info on M.2 drive mounting I decided that the drive was properly mounted and that it should be OK. Happily, the socket is right next to the large CPU cooler so it should get good airflow. Once again, I started a format going on the drive and this time I allowed it to complete. It went fine; no smoke issued from the drive. Since then I’ve continued to use the drive and haven’t experienced any problems.



The Noctua CPU cooler is massive, which should mean that it provides great cooling. It also means that it takes up a lot of space on your motherboard. The photo below shows the included mounting brackets in position (incorrectly—the one on the right needs to be flipped over so that the center of the curve is away from the processor socket). Other brackets are included for those using other Intel or AMD sockets. With the brackets attached properly it was not too hard to screw on the cooler. The hardest part was getting the screws started. Noctua provides a rudimentary handle-less screwdriver which, I assumed, I should use. But trying to start the spring-loaded screws with it was an exercise in frustration. When I switched to a regular screwdriver it worked pretty easily because I could press down with the palm of my hand while turning to get the screw started. (Maybe that right-angle screwdriver thing was intended for use with one of the other socket types.)


Two orientations, one at a right angle to the other, are possible when mounting the heatsink assembly. This allows flexibility depending on adjacent components. In the case of the Asus ROG STRIX X299-E Gaming mobo only one orientation would work. The other was blocked by the “ROG” bling thing (pretty colors!) on the mobo. Fortunately, the cooler cleared my DIMMs.


Next, it was time to attach the fan and connect it to the power header on the motherboard. The fan’s wire was too short so I was forced to use the Noctua low-noise cable extension. I worried a little that “low-noise” might mean poor cooling performance but I didn’t have any alternative at hand. So, I used the low-noise extension and so far the system has been stable.


The Noctua cooler actually comes with two fans: one which is to be installed between the two pairs of fins and the other, optionally, on the side. I don’t think I have enough room for the second fan. Since I’m not overclocking I decided to go with just the one fan. Again, so far, so good.

I have to say I much prefer the type of mounting bracket provided by Noctua over the plastic press-and-turn attachments that some CPU fans use. I always found it difficult to get these seated completely. And then, if you ever need to remove the fan for any reason—good luck removing and reinstalling the fan without destroying the plastic pieces.

Installing the EVGA GEFORCE GTX 1070 Graphics Card

This is by far the largest graphics card I’ve ever used. The next photo shows it in place along with the remaining four PCI-E slots. With the large cooler and large graphics card the top slot is nearly unusable; a card of any size would block the airflow coming out of the cooler. The next slots are 4- and 8-lane slots, which would be usable if you have a card that fits them. The bottom (5th) slot is barely visible in this photo.


You might be able to get a card into that bottom slot except for the fact that it would completely block the airflow from the twin GTX 1070 fans. So, with this graphics card and this cooler you really only have a single usable slot. (The graphics card wouldn’t fit in the other 16-lane slot at all.) Fortunately, everything else I need is on the mobo.

Getting the Case Fans Running

The be quiet! Dark Base 900 enclosure includes several fans built into the case but initially they don’t operate. Obviously, they need power, but I had been thinking that maybe the case was designed to provide its own fan power. I had noticed that the unusual power supply mounting arrangement includes a tap into the 110 VAC wiring as it enters the case. It turns out the case doesn’t get any power there. But the solution is simple: the case includes a SATA power connector that must be plugged into the power supply. I had seen the connector leading out from one of the wiring harnesses but, at first, I didn’t understand its purpose. A quick email to be quiet! got me the answer and then my fans were spinning.


On the Case

Overall, the Dark Base 900 is a very nice case. Together, the three built-in case fans plus the CPU cooler plus the two fans on the graphics card were all very quiet even with the case open. With the case closed it’s almost silent. The external hard drives on my desktop make more noise than this new tower system.

I like the air filters which are part of the case. They slide out pretty easily and look like they should go a long way toward keeping the inside of the enclosure clean. My last PC, built in a Cooler Master case, took the opposite approach—no filters but tons of ventilation. As a result, the airflow was pretty decent even when the case and the inside of the box were covered with dust. But cleaning those CPU and graphic card fans is always difficult and I think I prefer air filters that keep most of the dust out in the first place.

Getting all the panels (bottom, left & right) back on was a bit of a challenge. Screwing in the two screws that secure the bottom platform to the bottom of the case was difficult. With the case full of electronics getting those screws back in was a lot harder than taking them out.

And then I couldn’t get the right side panel on while the unit was sitting upright. But after laying the case on its left side the right side panel slipped right into place.

The 900 uses thumbscrews for various purposes: securing cards in their slots, securing the side panels and providing access to the various drive bays. I like the idea of thumbscrews but these screws do not turn easily. It’s impossible to tighten them by hand. I had to use a screwdriver in every case. Also, the drive bays did not pop right out when the screws were loosened. I had to fiddle with them a bit. It seems like the manufacturing tolerance was a little off.

Still, I like the case a lot at this point. It’s attractive—the best-looking case I’ve ever built—and did I mention that it’s quiet? I do miss the casters that my Cooler Master case had. The fully built out PC is heavy. Trying to maneuver it into place under the desk requires some muscle. I’m at an age when I’m wondering if I’ll have the strength and flexibility to continue building PCs this size in years to come. For those who haven’t built a PC it’s surprisingly physical—getting down on hands & knees, getting up and down repeatedly, spending hours bent over the case. And then, finally maneuvering it into its place under the desk.

The Acid Test

If you remember how I started this review you may remember my frustration at trying to edit two-camera 4K footage on my old PC. Playback of a multi-cam sequence was frustratingly slow. Once I had the new PC up & running I could tell it was going to be fast. But I wasn’t yet sure exactly how fast. When I eventually got Premiere Pro installed and copied my footage onto the Samsung M.2 SSD I was ready to find out just how fast. It’s great! It plays both cameras in full resolution very smoothly. To get a better sense of how the system was performing I opened up Windows’ Resource Monitor and got a look at all 16 CPUs. They were pretty busy but happily it looks like there might even be some processing power to spare. Of course, that also depends on the disk throughput. As before, I have three separate SSDs set up for footage, scratch and rendering output. I’ll have to dig into the Resource Monitor further as time permits. For now, life is good.


Sleep Tight

An unexpected bonus is that sleep mode was enabled by default and is actually working. I’ve been trying ever since Windows NT was first released to find a way to make sleep/hibernate work reliably. But it has rarely worked for very long. Right now I’ve only had this new system running for about a week so things could change with the addition of new software or hardware or a Windows update or gremlins or …. But for now it’s sleeping tight—and waking up refreshed!

Editing Multi-Camera Footage with Adobe Premiere Pro CC 2017

David Salahi Premiere Pro, Video, Video Editing 3 Comments

I did a couple of multi-camera shoots recently which required me to devise a more complex editing workflow than most of the shoots I’ve previously worked on. After a couple of false starts I came up with a process that worked out quite well. In this post I’d like to share what I learned. There may be better approaches (perhaps using nested sequences?) but for my needs the procedure described below was flexible and reasonably efficient.

I’m going to focus on a shoot I did of a two-day conference (Citizens’ Climate Lobby Southern California Regional Conference) in which three cameras were used for the plenary sessions. One camera (a Panasonic GH4) was focused exclusively on the speaker; another camera (GH4) was fixed on a large projection screen showing the speaker’s slide deck and the third camera (Sony PXWZ150) alternated between the speaker and shots of the audience. I used a separate audio recorder which was connected to a feed from the room’s sound system.

I learned that there’s a lot more setup work involved in creating a multi-camera editing project than for a single-camera shoot. It’s also important to set things up appropriately for your situation so that the rest of the editing process will work smoothly.

Preparations for Creating a Multi-Camera Source Sequence

One of the key steps in editing multi-cam footage in Premiere Pro is creating a Multi-Camera Source Sequence (MCSS). This synchronizes the footage from all the cameras (and the audio) and allows you to see the footage in Premiere’s Multi-Camera view. This makes it easy to select different camera angles while playing back the footage.

I had thought that my first step after importing the footage would be to create a multi-cam source sequence. But when I did that I found some problems with synchronization. This was because the Panasonic GH4 breaks its footage into a series of six-minute files. It will record continuously for as long as you like (until your SD card fills up), but the footage is split into those six-minute chunks.

I use Premiere Pro’s audio synchronization method, so when syncing multiple cameras Premiere is actually synchronizing only the first file from each of my GH4s. But in a one-hour talk there will be ten footage files and I found that only the first one was synchronized properly. The details depend on several factors, but I determined that the simple solution for my situation was to create a source sequence for each of my two GH4s. Creating a single source sequence per camera also simplifies editing later as it avoids the need to make edits that span two files. (The edits can still span two physical files but the process is transparent to the editor.)

To do this I simply select the set of GH4 files for one camera, right-click and choose New Sequence from Clip. This creates a regular sequence of the appropriate resolution and frame rate for the footage. I created separate camera source sequences for each of the GH4s.

One thing to watch out for when creating a camera source sequence is to make sure the files are sorted by filename (in the Project window) before selecting them to create the sequence. If some other column is selected the files will be added in that sort order, meaning that they will most likely be out of order in the resulting sequence.

A camera source sequence as I’m using the term is simply a sequence of consecutive footage files combined so that the entire sequence can be treated as a unit.
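The file counts mentioned above fall directly out of the six-minute segmentation. Here’s a small sketch of that arithmetic; `gh4_segment_count` is a hypothetical helper name of my own, and the six-minute segment length is the behavior I observed on my GH4s:

```python
import math

def gh4_segment_count(recording_minutes, segment_minutes=6):
    """Number of clip files a continuous recording produces, assuming the
    camera splits footage into fixed-length segments (about six minutes
    on my GH4s). The last, partial segment still counts as a file."""
    return math.ceil(recording_minutes / segment_minutes)

print(gh4_segment_count(60))  # a one-hour talk -> 10 files per camera
print(gh4_segment_count(75))  # a longer session -> 13 files per camera
```

With two GH4s rolling, that’s twenty files per hour that audio sync would otherwise have to line up individually, which is why combining each camera’s files into one source sequence first pays off.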

Creating a Multi-Camera Source Sequence

With the camera source sequences created I then create a Multi-Camera Source Sequence by first selecting each camera sequence & the audio file (recorded separately on a Tascam recorder). Then, with all camera sequences and the audio file selected I right-click and choose Create Multi-Camera Source Sequence from the popup menu. Finally, I create a new output sequence with my desired output resolution and then drag the MCSS onto the timeline.

Ensuring a Smooth Workflow

With a multi-camera workflow there are some options in terms of where you do things like scaling, color correction and audio editing. On this shoot I recorded on GH4s in 4K but was targeting output at 720p. This gave me the flexibility to punch in in post without losing any resolution. Of course, I then had to scale the footage down to fit the final output sequence resolution. So, the question was which should I do?

  1. Set the scale on each physical footage file within the camera source sequences or
  2. Set the scale on my camera source sequence or
  3. Set the scale in the final output sequence?

Option 1 requires extra work because I have multiple physical files for each camera. So, I used a combination of 2 & 3. I first set a scale factor of 0.33 to make the 4K footage fit properly on my 720p final output sequence. This allows me to see the entire source frame so I know what I’ve got when I’m choosing camera angles. Then, to take advantage of the ability to zoom in in post I can adjust the scale factor of individual clips on the output timeline.
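The scale-factor arithmetic behind options 2 & 3 can be sketched as follows. This is my own back-of-the-envelope check, assuming UHD 4K (3840 x 2160) source footage and a 1280 x 720 output sequence:

```python
# Fit UHD 4K footage into a 720p output sequence.
source_w, source_h = 3840, 2160   # GH4 UHD frame
target_w, target_h = 1280, 720    # final output sequence

# Use the smaller of the two ratios so the whole frame fits.
scale = min(target_w / source_w, target_h / source_h)
print(round(scale, 2))  # 0.33 -- the factor set on the camera source sequences

# Headroom for punching in: at full scale you could crop a 1280x720 window
# straight out of the 4K frame, i.e. up to a 3x "zoom" with no upscaling.
max_punch_in = 1 / scale
print(round(max_punch_in, 1))  # 3.0
```

Since 16:9 footage is going into a 16:9 sequence, both ratios come out the same here; the `min()` only matters if the aspect ratios differ.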

Editing Inside the Multi-Camera Source Sequence

Now, to set the scale factor on an entire camera source sequence I need to do that within the multi-camera source sequence. However, you cannot directly open a multi-camera source sequence from the Project window. If you double-click an MCSS in the Project window (or in the timeline of a final edit sequence) it opens in the Source window. From there you can see each of the camera views but you’re limited in what you can do. You can’t set the scale of the individual source sequences, for example.

The fix for that is to open the MCSS by control-double-clicking on it (in Windows) in the final output timeline. That will open it into a view in the Timeline window showing each source sequence (each camera angle) as a separate layer in the timeline:

You can then select each layer so you can scale or otherwise adjust it as needed. Of course, you’ll have to temporarily toggle off the visibility of higher layers in order to see the effects of your changes on lower layers. And in this example, you can see that I’ve muted all the scratch audio (camera audio) and left only Track 1, the Tascam recorder sound, enabled.

I also thought about doing color correction in the multi-cam source sequence figuring that I could just adjust each camera sequence all at once to correct/match the other cameras. However, the room where the plenary sessions took place had some relatively large windows. This meant that the color temperature changed throughout the day so this approach would not work well. Instead, I decided to color correct each clip in the final output sequence to get consistent color from shot to shot.

The other thing I tend to do is adjust the audio before chopping the final output sequence into camera angle clips. Unlike my case with color correction it worked well for me to make all the audio adjustments once.

This process took care of all the setup I needed. The rest of the work is pretty straightforward as you edit the multi-cam source sequence to choose the different camera angles at the appropriate times. This results in a final output sequence like this one with cuts between cameras.

Pilotfly Stabilizer, A Cautionary Tale

David Salahi Gear 2 Comments

I’ve had some bad experiences with buying stabilizers directly from manufacturers in China. In a previous post (see sidebar) I commented on the problems I had trying to order a stabilizer from CAME. I eventually canceled that order and purchased a Pilotfly H1+. After receiving that unit I posted about the vagaries of balancing it and in another post I discussed the problems I had when the unit’s parameters got scrambled.

Now, after just eight months of very light use my H1+ has failed completely—it won’t power on. I chatted with Pilotfly on their Facebook page and they recommended that I return it for repair. To Taiwan. And, of course, there’s no warranty.

Read More


Android or iOS?

David Salahi Gear 1 Comment

Over the years, I’ve owned several Apple products starting with the Apple II. But I’m not an Apple fanboy. I believe in using the right tool for the job. Later, I needed to run CP/M so I bought a Franklin and have since had a number of Windows PCs. In 2004 I decided that the iPod was the right music player for me and I eventually had three of them over the years.

My first smartphone was the original Droid and when I bought my first tablet I went with the Motorola XOOM, both Android devices. But when I was ready for a new tablet I got an iPad 3 and I later bought an iPad Air. I’ve been pretty happy with both iPads (with some misgivings; see below)—until yesterday.

Trouble in Apple Paradise

Read More

X Theme for WordPress is a Powerhouse

David Salahi Website Design 5 Comments

I was recently introduced to the versatile X theme for WordPress by another web designer and was immediately impressed by its power, attractiveness and flexibility. The theme makes it easy to create the sort of open, graphically rich websites that are so popular today. At the same time, it plays well on everything from small smartphone screens to large desktop screens. And it’s highly customizable so tweakers like me can get down and dirty with the settings and the code to get things looking just the way we like. I’ve been so impressed by all the functionality and flexibility that I decided to convert both this blog and my business website to X.

Website built with the X theme for WordPress

The X theme comes with a slew of plugins including a visual drag-and-drop page builder that allows anyone from the casual designer to the hard-core coder to quickly create attractive sites.
Read More

Panasonic GH4 Special Microphone Needs Special Extension Cable

David Salahi Gear 3 Comments

I wanted to use my Panasonic GH4 microphone, the DMW-MS2, as a boom mike so I did a little test recently, attaching it to a boom on a light stand. This mike is designed to be mounted on the GH4 hot shoe and I’ve used it that way numerous times. But I had a situation calling for a boom mike so I thought I’d try it out that way.

The microphone itself has a cable that’s only 8” long and the cable terminates in a 1/8" plug. Obviously, I would need an extension cable for the boom situation so I got one out and plugged it in. I was surprised to find that the microphone didn’t work properly. The reason is that the microphone has a non-standard plug which communicates some additional information between the mike and the camera. This allows you to configure the mike dynamically to any one of four modes: shotgun, super shotgun, stereo, or lens angle tracking. Those are nice features but they come with a cost. I wrote about a related problem previously. In that post I explained that you can’t connect the DMW-MS2 to an analog preamp.

So, I was aware of the special nature of the microphone but in this case I figured it would work because I wasn’t connecting to a preamp. Instead, I’d be connecting (almost) directly to the GH4. There would just be an extension cable in between. Wrong.

Normal microphone with regular 1/8″ stereo plug

Panasonic microphone with special 1/8″ stereo plug; note the extra ring

Read More

Matching Color Temperature Using the Panasonic GH4

David Salahi Gear, Tips & Techniques 1 Comment

Aputure Light Storm LS 1c bicolor LED light panel; you can see the two different LED colors in alternating rows

Aputure Light Storm LS 1c controller; top red number is power level; bottom number is color temp (multiply by 100)

The first time I set up my new LED panel for a shoot I realized I didn’t have a good procedure worked out for matching the color temperature of the ambient light. My new panel is the Aputure Light Storm LS 1c which is a bicolor panel. I chose it partially for the ability to set any desired color temperature between 3200K and 5500K. I like having the flexibility to dial in the color temperature to match the ambient light without having to bother with gels. I had figured it would be easy to do with the new panel but I hadn’t actually thought through the workflow.

Read More

Excerpts from a Dance Show

David Salahi Uncategorized 1 Comment

These are excerpts from a dance show I shot in December 2015. The entire show was about an hour and fifteen minutes long with a total of 30 dances/songs. The footage was shot on a Panasonic GH4 with a Panasonic 35-100 mm f2.8 lens from the back of the auditorium, about 200 feet from the stage. Shot in 4K with the Cinelike D profile; ISO was at 1000 for most of the show. Edited & color corrected with Premiere Pro CC, reframed to HD (1920 x 1080); uses some of the Red Giant Universe transitions.

The dance show was staged by LonDance, a dance studio in Laguna Niguel where I recently started taking lessons. In addition to being terrific dancers the instructors are also excellent teachers. And they’re unfailingly patient—especially important for an uncoordinated student like me!