Southern Ocean Topography

During my postdoc at SIO, I have been working with Paola Cessi to understand how the interaction of the ACC with large topographic features affects the turbulent equilibration of the Southern Ocean stratification. We have finished writing up our results and have just submitted a manuscript to the Journal of Physical Oceanography.

R. Abernathey and P. Cessi. Equilibration of circumpolar currents with and without topography. Submitted to J. Phys. Oceanogr., 2013.

The paper compares simulations of an idealized circumpolar current with a flat bottom to ones with a topographic ridge. This movie shows the spinup and equilibration of two example cases.

 


Isopycnal Mixing in a Channel

I just submitted a paper to Ocean Modelling entitled Diagnostics of Eddy Mixing in a Circumpolar Channel, coauthored with David Ferreira and Andreas Klocker.

Download link: Abernathey, R., D. Ferreira, and A. Klocker, 2013: Diagnostics of Eddy Mixing in a Circumpolar Channel. Ocean Modelling, submitted.

The point of this paper is to compare many different diagnostics of mixing rates and to demonstrate the equivalence between Lagrangian diffusivities, passive tracer mixing, and the mixing of potential vorticity. The link to eddy potential vorticity fluxes is especially important because of the connection to the meridional overturning circulation. Although the flow is idealized, we hope this work can be useful in the context of the DIMES experiment. The results can be summarized by this figure, which compares the vertical profiles of the different diagnostics.

A figure from our recently submitted paper.


Abstract:

Mesoscale eddies mix tracers horizontally in the ocean. This paper compares different methods of diagnosing eddy mixing rates in an idealized, eddy-resolving model of a channel flow meant to resemble the Antarctic Circumpolar Current. The first set of methods, the “perfect” diagnostics, are techniques suitable only to numerical models, in which detailed synoptic data is available. The perfect diagnostics include flux-gradient diffusivities of buoyancy, QGPV, and Ertel PV; Nakamura effective diffusivity; and the four-element diffusivity tensor calculated from an ensemble of passive tracers. These diagnostics reveal a consistent picture of along-isopycnal mixing by eddies, with a pronounced maximum near 1000 m depth. The only exception is the buoyancy diffusivity, a.k.a. the Gent-McWilliams transfer coefficient, which is weaker and peaks near the surface and bottom. The second set of methods are observationally “practical” diagnostics. They involve monitoring the spreading of tracers or Lagrangian particles in ways that are plausible in the field. We show how, with sufficient ensemble size, the practical diagnostics agree with the perfect diagnostics in an average sense. Some implications for eddy parameterization are discussed.


Scripps!

I can’t believe three months have already passed since I arrived at Scripps Institution of Oceanography to begin my postdoc with Paola Cessi. I feel like I’m finally starting to get settled after a very tumultuous summer that included driving across the country, sleeping on Uriel’s couch, moving twice, and taking two separate trips back to the east coast. I’m looking forward to staying put for a while and getting deep into some new projects.

San Diego is pretty!


Global Mixing Paper Submitted

I have finally managed to submit this paper to JGR-Oceans. The paper represents our attempt to use passive-tracer-based methods to diagnose mixing rates globally. The basic method was developed by John Marshall, Emily Shuckburgh, Helen Jones, and Chris Hill in a 2006 paper; here we apply it over the global ocean to produce a map of mixing rates.

Abernathey, R. and J. Marshall, Mixing of Passive Tracers by the Surface Geostrophic Flow, J. Geophys. Res., submitted, 2012.

The results can be summed up by this figure, a global map of passive-tracer mixing rates.



How I Made the Channel Movie

I get a lot of questions about how I made this video:

There is obviously more to it than just model output. The video incorporates visual elements such as arrows and wind barbs to illustrate the surface forcing and uses camera motion to reveal different aspects of the circulation. In my opinion, the coolest thing about the video is the transition from the cartoon schematic to the 3D view that happens right at the beginning. This makes a strong impression during a presentation when the audience thinks they are looking at a still figure and then realizes it is a 3D movie!

It was not easy to make this movie. I literally spent weeks on it. I have decided to share my process in hopes that it will be useful to other people. I encourage you to leave comments below!

The basic steps were as follows. First I ran the model to output the necessary data. Then I generated images of the temperature fields in Matlab. I then assembled the 3D view and created the supplementary graphics in Apple Motion, a professional motion graphics program. Finally, I compressed the video with the H.264 codec using Apple Compressor. Below I go over these steps in more detail.

The Model Output

In order to generate a high-quality animation, you need high-resolution data to begin with. In particular, you need to output data from your model with a frequency compatible with the video frame rate you intend to use. Most video formats use either 24, 25, or 30 frames per second (fps); I prefer 24 fps. The goal is to output a data snapshot for each frame; the frequency of output will determine how “fast” time runs in the movie. For the movie above, which is 24 fps, I output a snapshot of the temperature field every two days. This means that it takes 7.6 seconds to show one year of simulation, a reasonable pace for this particular run. I also decided to output the spinup of the model from rest, rather than simply showing an equilibrium state, since this is by definition more dynamic, and therefore more exciting to watch.
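
As a quick sanity check of that timing, here is the back-of-the-envelope calculation in Matlab (the variable names are mine, and the numbers are just the ones quoted above):

    % How the output interval maps onto video time (illustrative numbers)
    fps = 24;                                  % video frame rate
    output_interval = 2;                       % days of model time between snapshots
    frames_per_year = 365 / output_interval;   % 182.5 frames per model year
    seconds_per_year = frames_per_year / fps   % ~7.6 s of video per model year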

The Matlab Stage

The next step is to read the data into Matlab (or Python, or whatever program you prefer to use to analyze model output) and render figures for each frame. Presumably you know how to make contour or pcolor plots of your data already. The important point here is to recognize that a “movie” is nothing more than a sequence of these plots, saved in image format. At this stage of the process, where we are not concerned about file size or bandwidth limitations, I chose to generate enormous images.

For this movie, I generated three filled-contour plots for each timestep: one of the surface (x/y), a side cross section (y/z), and a front cross section (x/z). I made the axes fill the entire figure and made all axis elements (ticks, lines, etc.) invisible, leaving just the pure colors of the contour plot. I output each image as a .png file, a lossless format which is much better than .jpg for this purpose. I think the dimensions were 2000 x 4000 pixels for the surface and side views. Huge! I wrote a script to save these figures using sequential numbering, i.e. top_00001.png, side_00001.png, etc. This makes the next step much easier.
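
To make that concrete, here is a rough sketch of what one version of the rendering loop might look like in Matlab. This is not my actual script: the image dimensions, color limits, and the read_snapshot loader are placeholders you would replace with your own.

    % Sketch of a frame-rendering loop (sizes and the loader are placeholders)
    for n = 1:nframes
        T = read_snapshot(n);                      % hypothetical function to load snapshot n
        fig = figure('Visible', 'off', 'Units', 'pixels', ...
                     'Position', [0 0 4000 2000], 'PaperPositionMode', 'auto');
        contourf(T, 30, 'LineStyle', 'none');      % filled contours, no contour lines
        caxis([0 8]);                              % fixed color limits so frames match
        set(gca, 'Position', [0 0 1 1]);           % axes fill the entire figure
        axis off;                                  % hide ticks, labels, and box
        fname = sprintf('top_%05d.png', n);        % sequential numbering
        print(fig, fname, '-dpng', '-r0');         % -r0 saves at screen resolution
        close(fig);
    end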

Assembling in Apple Motion

This is the step that is  unfamiliar to most scientists. Apple Motion is a professional motion graphics program designed for artists / animators / creative people. (Adobe After Effects is a competing product.) These are basically graphic design programs, like Illustrator or Photoshop, with a timeline, allowing the elements of the design to evolve and change with time. You can also move the virtual camera around your scene to show different views. You manipulate everything visually. To give you a sense of how the program works, here is a screenshot:

Motion is able to interpret sequences of images as movies, which you can then arrange and manipulate in 3D space. Each of the cube faces I generated in Matlab appears as a flat, two-dimensional plane in Motion. I arranged these faces into the shape of a cube and voilà: a 3D animation. (In truth, figuring out how to do this was very tedious; trial and error was my method for learning the program.)

Once I had the basic 3D geometry of the channel figured out, I added the extra elements such as spinning heat-flux arrows and wind barbs to represent the surface forcing. I think these add a lot to the animation. Motion has some cool built-in effects that helped with generating these elements. Again, it took a lot of experimentation (and reading of the manual) to figure out how to make things look how I wanted.

The final step was to figure out the timing and camera movement. This basically comes down to subjective, artistic choices about how I wanted the movie to unfold.  The hardest part was deciding what I wanted to happen, and what views were the most interesting. Then I just dragged the camera around to make it happen.

I worked the whole project in 1080p HD format, and I output the movie from Motion as an Apple ProRes 12-bit QuickTime video. This produced a 2 GB file for a one-minute movie. The image quality was impeccable, but obviously such a file is unsuitable for online use, which leads to the final stage…

Compression

I host my videos online with Vimeo. I am a huge fan of this site for many reasons. Vimeo provides very clear guidelines about how to optimally prepare your video for uploading. But even if you don’t plan to use Vimeo, these guidelines are a great reference. The key to it all is the magic H.264 codec, which reduces HD video to manageable bit rates while maintaining high quality. I used Apple Compressor to compress my video, and in the process I also down-scaled it to 720p resolution. The final file was only 47 MB. Quite a reduction!

There are alternatives to Compressor for H.264 encoding. The most prominent is ffmpeg, an open-source, all-purpose video toolkit. As with many open-source alternatives, it may require a bit more effort to get things working properly.
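
For reference, an ffmpeg invocation along these lines will turn a numbered image sequence directly into an H.264 movie. This is just a sketch (the file names and the quality setting are illustrative, not what I actually used):

    ffmpeg -framerate 24 -i top_%05d.png -c:v libx264 -pix_fmt yuv420p -crf 18 channel_movie.mp4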

I hope this post is useful for someone. I welcome any feedback you might have.


Welcome to my new blog

This is a place for me to post ideas and work in progress. As such, a lot of the content is private and hidden from public view. If you have an account you can log in to see the hidden posts.