SSD encryption – How does it work?

Many vendors brag about the AES-256 encryption capabilities of their SSD drives. This sounds good; everybody likes to keep their data safe and secure. But how does this encryption really work, and what does it protect you from? Easy questions, right? Surely this is clearly documented by the vendors.

Well… it is surprisingly hard to find out how the encryption in modern SSD drives actually works. You get certain details, but not the full picture. Google around and you are likely to find more questions than answers.

The main question is: is the password entered by the user used to encrypt the actual AES key that encrypts the data on the SSD?

The data stored on the flash memory chips is always encrypted. The drive controller maintains the key and encrypts/decrypts data on the fly as it passes through. An Intel white paper states that Intel 320 SSD drives are initialized with unique encryption keys at the factory. The user can trigger the generation of a new encryption key through the secure erase or enhanced secure erase procedure. The white paper does not answer the main question: is the encryption key encrypted with the password entered by the user?

The obvious counter-question is: why would they implement AES encryption on the drive if the key is not encrypted with the user-entered password? The answer is easy to figure out. Think about the situation when you want to discard an old drive. With traditional hard drives you write zeros (or random data, or random data multiple times, depending on your level of paranoia) over the disk to remove the existing content. With flash-based storage this is not so easy. With a hard drive you can write data over and over again to exactly the same spot on the disk platter. Not so with flash-based storage. Flash memory consists of individual memory cells, and each cell supports only a finite number of erase-write cycles. This means that at some point you can no longer write data to a specific flash memory cell. When computers use a hard disk there are often certain “hot spots” on the disk; think about the location of your swap file. Those spots get constant updates while other parts of the disk are mostly just read. Without some intelligence, the flash memory would quickly wear out at those hot spots. To combat this, SSD drives use wear leveling algorithms. Instead of always writing data to the spot requested by the computer/operating system, the drive decides how to distribute the writes evenly over the flash cells. The SSD maintains an internal mapping of where the data was actually written so that during read operations it can recover it from the correct place.
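If you want to picture the idea in code, here is a toy sketch (plain Java, nothing like real firmware) of a mapping table with per-block erase counters:

// Toy illustration of wear leveling (not real SSD firmware): writes are
// redirected to the least-worn physical block and a mapping table remembers
// where each logical block actually ended up. Note that the previous physical
// copy of the data is simply left behind, which is exactly why overwriting
// "the same spot" does not reliably erase anything on an SSD.
import java.util.HashMap;
import java.util.Map;

public class WearLevelingSketch {
	private final int[] eraseCount;                        // erases per physical block
	private final Map<Integer, Integer> logicalToPhysical = new HashMap<Integer, Integer>();

	public WearLevelingSketch(int physicalBlocks) {
		eraseCount = new int[physicalBlocks];
	}

	public int write(int logicalBlock) {
		// Pick the physical block with the fewest erases instead of a fixed location
		int target = 0;
		for (int i = 1; i < eraseCount.length; i++) {
			if (eraseCount[i] < eraseCount[target]) {
				target = i;
			}
		}
		eraseCount[target]++;                              // erase-before-write wears the cell
		logicalToPhysical.put(logicalBlock, target);       // old copy (if any) stays where it was
		return target;
	}

	public int read(int logicalBlock) {
		return logicalToPhysical.get(logicalBlock);
	}
}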

Now how does this relate to erasing data? Wear leveling means that the operating system can never be certain that something it has written to the SSD has actually been removed from it. Even if the operating system tries to write to the very same spot on the disk to remove existing data, the SSD could be directing those writes somewhere else. In some cases the flash cells may have gone through their limited number of program-erase cycles and cannot be erased at all. These things pose a problem for securely removing data. AES encryption to the rescue! Remember that the data written to the flash memory is encrypted with a key maintained by the drive. Instead of erasing the actual data, we can simply erase the key. Once the key is gone the data is useless. It does not matter if somebody is able to recover the encrypted data from the flash chips in a laboratory: since the encryption key no longer exists, there is no way to decrypt the data.

Now back to the original question. Let’s approach it with another question: could the drive use the password entered by the user without using it to encrypt the main encryption key? The obvious answer is yes. The drive could implement a simple system where the password is stored on the disk in hashed form; when the user enters it, the drive hashes it again, compares it to the stored hash and allows access only if the correct password is given.
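The stronger alternative is to use the password to wrap (encrypt) the drive’s internal data encryption key. Below is a purely conceptual Java sketch of that idea using PBKDF2 and AES key wrapping; it is not how any particular drive firmware implements it, and all the names and parameters in it are made up for illustration.

import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyWrappingSketch {
	public static void main(String[] args) throws Exception {
		// The data encryption key (DEK): random, generated once, never leaves the drive
		// (note: 256-bit AES may need the JCE unlimited strength policy on older JREs)
		KeyGenerator generator = KeyGenerator.getInstance("AES");
		generator.init(256);
		SecretKey dek = generator.generateKey();

		// A key encryption key (KEK) derived from the user-entered password
		char[] password = "user-ata-password".toCharArray();
		byte[] salt = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };   // would be stored on the drive
		SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
		byte[] kekBytes = factory.generateSecret(
				new PBEKeySpec(password, salt, 10000, 256)).getEncoded();
		SecretKey kek = new SecretKeySpec(kekBytes, "AES");

		// Wrap (encrypt) the DEK with the KEK; only the wrapped form is persisted
		Cipher cipher = Cipher.getInstance("AESWrap");
		cipher.init(Cipher.WRAP_MODE, kek);
		byte[] wrappedDek = cipher.wrap(dek);

		// Unwrapping requires the correct password; without it the DEK, and
		// therefore the data, is unrecoverable
		cipher.init(Cipher.UNWRAP_MODE, kek);
		SecretKey recovered = (SecretKey) cipher.unwrap(wrappedDek, "AES", Cipher.SECRET_KEY);
		System.out.println(Arrays.equals(recovered.getEncoded(), dek.getEncoded()));
	}
}

With this design, changing the password only re-wraps the key and a secure erase only needs to discard the key, whereas a hash-based access control scheme gives no cryptographic protection if somebody reads the flash chips directly.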

But how does the SSD actually work? Is the ATA password entered by the user actually used to encrypt the AES encryption key, or is the password just used for traditional access control? After spending some quality time on Google, I finally found the answer on Intel Communities.

On April 8th, 2011 at 6:29, “Scott” from Intel Corporation answered this specific question: “Yes, ATA password is used to encrypt the encryption keys stores on the SSD.” (This answer relates to the Intel 320.)

So there you have it. Of course this only applies to this specific Intel drive, and it is just a comment on a discussion forum – hardly an official statement. It is also interesting that I could not find any white paper or more official documentation on the subject, even though this is a very important topic.

Now, this is just the beginning. At least with Samsung 840 Pro drives there is already discussion about their encryption requiring TPM support from the system. Once again, it is very difficult to find any official documentation on the topic, but it could be related to the OPAL specification. To read more about OPAL, check this presentation.

Posted in Security |

Get rid of NaN and INF values in Orbeon

With calculations in binds it is easy to run into NaN (Not-a-Number) or INF values showing up on the form.

There are a few ways to get rid of them:

  • You can use if() inside the calculation to prevent it from running when the inputs would result in NaN or INF.
  • Use the relevant attribute in your bind expression to hide the field if the value is not numeric. You should be able to use something like not(number(myNode)) as the condition.
  • Use the xxforms:format attribute together with translate() to simply translate the NaN and Infinity values to empty strings:
    <xforms:output .. xxforms:format="translate(translate(., 'NaN', ''), 'Infinity', '')" />
    
Posted in Orbeon |

Orbeon Forms processor for extracting locale from request

A simple processor for Orbeon Forms that extracts locale information from the request.

package fi.iki.juhap.xpl;

import java.util.Locale;

import org.dom4j.Document;
import org.dom4j.DocumentHelper;
import org.dom4j.Element;
import org.orbeon.oxf.pipeline.api.ExternalContext;
import org.orbeon.oxf.pipeline.api.PipelineContext;
import org.orbeon.oxf.processor.ProcessorInputOutputInfo;
import org.orbeon.oxf.processor.SimpleProcessor;
import org.orbeon.oxf.xml.TransformerUtils;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;

/**
 * Extract locale information from request.
 * 
 * Example output: 
 * 
 * <locale>
 * 		<language>en</language>
 * 		<country>US</country>
 * 		<variant />
 * </locale>
 *
 */
public class RequestLocaleProcessor extends SimpleProcessor {
	public RequestLocaleProcessor() {
		addOutputInfo(new ProcessorInputOutputInfo(OUTPUT_DATA));
	}
	
	public void  generateData(PipelineContext context, ContentHandler contentHandler) throws SAXException {
		ExternalContext externalContext  = (ExternalContext) 
				context.getAttribute(PipelineContext.EXTERNAL_CONTEXT);
		
		Locale locale = externalContext.getRequest().getLocale();

		Document document = DocumentHelper.createDocument();
		Element root = document.addElement("locale");
		root.addElement("language").addText(locale.getLanguage());
		root.addElement("country").addText(locale.getCountry());
		root.addElement("variant").addText(locale.getVariant());
		
		TransformerUtils.writeDom4j(document, contentHandler);		
	}
}

In order to use this you need to add it to the processors-local.xml file (in WEB-INF/resources/config) like this:

<processors xmlns:jp="http://fi.iki.juhap/processors">
    <processor name="jp:request-locale">
        <class name="fi.iki.juhap.xpl.RequestLocaleProcessor" />
    </processor>
</processors>

Usage is very simple. In your XPL you need to declare the namespace and then use the custom processor like any other:

<p:config xmlns:p="http://www.orbeon.com/oxf/pipeline"
          xmlns:jp="http://fi.iki.juhap/processors">
..
	<p:processor name="jp:request-locale">
		<p:output name="data" id="request-locale" />
	</p:processor>
..
</p:config>
Posted in Web development |

Orbeon, get-request-attribute and “Content is not allowed in prolog”

“An Error has Occurred Fatal error: Content is not allowed in prolog.”

Not one of my favourite problems, as it is sometimes difficult to figure out the actual cause. In general, “content is not allowed in prolog” indicates there is some content in the XML file or stream before the <?xml …> declaration.

The error can easily happen if Orbeon thinks something is XML when it’s not. I’ve run into this problem a couple of times with the xxforms:get-request-attribute function. By default Orbeon seems to assume the attribute contains XML data, tries to parse it and then runs into this problem if it is not actually XML.

Luckily this is easy to fix. Just pass the content type in as the second argument:

xxforms:get-request-attribute('MY_VAR', 'text/plain')
Posted in Orbeon |

Debugging Orbeon XPL programs

Orbeon XPL is a way of describing processing flows using XML syntax. Since there is no debugger, debugging mostly happens by putting something in and looking at what comes out.

You can always get the output from a pipeline by using the “debug” attribute on processors, but reading the information from log files can be cumbersome if you have many steps. I’ve found it easier to write the output from certain processors to separate files on disk. I then keep the files open in an editor that reloads changes automatically. This makes it quick and easy to see how the changes I have made to the XPL have affected the results.

To do this I have created a simple pipeline that writes its input to a given location.

write-to-file.xpl:

<p:config xmlns:p="http://www.orbeon.com/oxf/pipeline"
	  xmlns:oxf="http://www.orbeon.com/oxf/processors">
	<p:param type="input" name="data" />
	<p:param type="input" name="setup" />

	<p:processor name="oxf:xml-converter">
	    <p:input name="config">
	        <config>
	            <encoding>utf-8</encoding>
	        </config>
	    </p:input>
	    <p:input name="data" href="#data"/>
	    <p:output name="data" id="converted"/>
	</p:processor>
	<!-- Comment out the following processor to disable debugging -->
	<p:processor name="oxf:file-serializer">
	    <p:input name="config" href="#setup" />
	    <p:input name="data" href="#converted"/>
	</p:processor> 
 </p:config>

That pipeline is used from my main XPL by connecting it to the interesting outputs (below, the #xforms output).

<p:processor name="oxf:pipeline">
	<p:input name="config" href="write-to-file.xpl"/>
	<p:input name="data" href="#xforms" />
	<p:input name="setup">
		<config>
			<directory>c:/temp</directory>
			<file>xforms.xml</file>
		</config>
	</p:input>
</p:processor>
Posted in Orbeon |

Orbeon XPL copy values from request

Quite often I end up in situations where I need to use a value from the HTTP request (either a parameter or a header) inside the configuration element for another Orbeon XPL processor.

One fairly simple way to do this is to use the oxf:xslt processor as shown below:

<p:config xmlns:p="http://www.orbeon.com/oxf/pipeline"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
	<!-- Read in request headers -->
	<p:processor name="oxf:request">
		<p:input name="config">
			<config>
				<include>/request/headers/username</include>				
			</config>
		</p:input>
		<p:output name="data" id="request" />
	</p:processor>

	<p:processor name="oxf:xslt">
		<p:input name="data" href="#request" />
		<p:input name="config">
			<config xsl:version="2.0">
				<orgUnit>A</orgUnit>
				<username>
					<xsl:value-of select="//header[name='username']/value"/>
				</username>
			</config>
		</p:input>		
		<p:output name="data" id="config" />
	</p:processor>
</p:config>

The example code will take the “username” request header and combine it with the XML fragment:

<config>
	<orgUnit>A</orgUnit>
</config>

to create

<config>
	<orgUnit>A</orgUnit>
 	<username>...</username>
</config>

Another option could be to use the aggregate and xpointer functions.

Posted in Orbeon |

Customizing Liferay service builder templates

Liferay service builder code generation is based on FreeMarker templates. The default templates come packaged inside the portal jar files. You can take a look at them, for example, on GitHub (note that those are from the master branch – if you want to make modifications you should locate the template files that correspond to the version of your Liferay setup). Another option to see the templates is to locate portal-impl.jar and extract the template files from there. The templates are in the com/liferay/portal/tools/servicebuilder/dependencies package/folder.

In order to make service builder use your modified templates you need to make them available and then change the Ant build scripts to use them.

1. Create an sb_templates directory underneath the liferay-plugins-sdk root folder and place your modified template there. I suggest using the original template file name to keep things simple.

2. Modify the Ant build script to override the default templates with your own. The relevant build file is build-common-plugin.xml (located in the liferay-plugins-sdk folder). Look for the “build-service” target. There you need to make two changes.

2.1 Locate the path with id=”service.classpath”. The service builder will use this path to locate the templates. Add the following line (the sb_templates pathelement) as the first item:

<path id="service.classpath">
    <pathelement location="${project.dir}/sb_templates" />
    <path refid="lib.classpath" />
    <path refid="portal.classpath" />
    <fileset dir="${app.server.lib.portal.dir}"
             includes="commons-digester.jar,commons-lang.jar,easyconf.jar" />
    <fileset dir="docroot/WEB-INF/lib" includes="*.jar" />
    <pathelement location="docroot/WEB-INF/classes" />
</path>

2.2 Modify the java task by adding an argument like the service.tpl line below for each template you want to override:

<java
	classname="com.liferay.portal.tools.servicebuilder.ServiceBuilder"
	classpathref="service.classpath"
	outputproperty="service.test.output">
	<arg value="-Dexternal-properties=com/liferay/portal/tools/dependencies/portal-tools.properties" />
	<arg value="-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger" />
	<arg value="-Dservice.tpl.service_clp_serializer=vt_service_clp_serializer.ftl" />					
	<arg value="service.input.file=${service.input.file}" />
	<arg value="service.hbm.file=${basedir}/docroot/WEB-INF/src/META-INF/portlet-hbm.xml" />

3. Execute service builder. Remember to check the output to see if there are errors in your templates. If your modified template is not found, service builder may just skip generating the file. This can be confusing if you are generating on top of existing files. If you suspect a file is not being regenerated, compare its last modified timestamp to the other generated files.

Posted in Liferay |

Liferay service builder, class loader issues

When you create a new service with Liferay service builder, the system generates a whole lot of boilerplate code based on the few lines you put into service.xml. Part of that code deals with the problem of passing objects between different web applications.

Think about a simple example where you have a portlet project and a hook project. In the portlet project you have defined some services, and in the hook project you consume them. Both the portlet and the hook are deployed as separate webapps under the application server of your choice (probably Tomcat).

When you call a service from the hook project, the service actually executes in the portlet webapp context. The results are then returned to the code that executes under the hook webapp context.

Sounds easy, but it’s not. Each webapp has its own class loader, and the same class loaded by different class loaders is not considered the same class from the JVM’s perspective. This means the two are not compatible: if you take an instance of MyModel that has been loaded by class loader B and try to cast it to the MyModel loaded by class loader A, you will get a ClassCastException. This is a problem for us. The service running in the portlet project instantiates MyModel using its own class loader. The instance is then supposed to be returned to the hook project, but the hook has its own view of the MyModel class (since it uses a different class loader).

Liferay service builder deals with the problem using class loader proxies and some “magic”. When you call the portlet service from the hook project you don’t actually get back the same instance of MyModel that was instantiated in the service. Instead, the code generated by service builder instantiates a new model using the class loader from the hook project. The information from the model instantiated in the service is then copied into this new model instance, which is returned to the caller.

This works beautifully and is pretty transparent to the developer, as long as all the necessary code is generated by service builder. If your service returns a class that is not generated by service builder, you may run into trouble.

Let’s say you are implementing a service that does not actually access the database. Therefore you have a service in your service.xml that does not define any columns. Instead you want to create your own model class by hand and return that from the service. This becomes an issue, because all the magical code generated by service builder to deal with class loader issues only applies to model classes generated by service builder. There is an issue in the Liferay bug tracker about this.

One way to work around the problem is to implement your own serialize/deserialize procedure in the ClpSerializer that gets generated by service builder. The process is pretty simple: you first serialize the object returned from the service and then deserialize it. Since the deserialization happens under the hook project’s class loader, you end up with an instance that is compatible with the rest of the code in the hook project.

Below is an example of the serialize/deserialize process:

	public static Object translateOutputGeneric(Object obj) {
		try {		
			ByteArrayOutputStream bos = new ByteArrayOutputStream();
			ObjectOutputStream oos = new ObjectOutputStream(bos);	
			oos.writeObject(obj);
			ByteArrayInputStream bis = new ByteArrayInputStream(bos.toByteArray());
			ObjectInputStream ois = new ObjectInputStream(bis);
			return ois.readObject();
		} catch(IOException e) {
			throw new RuntimeException(e);
		} catch (ClassNotFoundException e) {
			throw new RuntimeException(e);
		}
	}
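
As a hypothetical example of how this helper might be used (the class name MyCustomResult is made up, it must implement Serializable, and the translation code in your generated ClpSerializer may look different):

	public static Object translateOutput(Object obj) {
		// Hand-written model classes are not handled by the generated
		// translation code, so fall back to serialize/deserialize for them
		if (obj instanceof MyCustomResult) {
			return translateOutputGeneric(obj);
		}

		// ...translation of service builder generated models as before...
		return obj;
	}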
Posted in Liferay |

Debugging ClassCastExceptions in Eclipse

When working with application servers it is not uncommon to run into strange ClassCastExceptions which seem to have no reason. You are trying to cast an instance of MyModel to MyModel, and yet the JVM is having issues with it.

Experience has shown that there are two main categories for these “strange problems”:

  1. Multiple versions of the same class are present in the classpath. This can easily happen if you, for example, accidentally include libraries in your WAR file that are also provided by the application server.
  2. There is only one version of the class, but it is being loaded by different class loaders. If different class loaders load the same class, it is not actually the same class from the JVM’s perspective. Instead you have two different classes, and casting from one to the other causes a ClassCastException.

The easiest way to dig into the second category of problems is to use the Eclipse debugger. Usually the issue comes from code like this:

   MyModel model = myRemoteService.getObj();

To make debugging easier you can split the code into two parts. This way you get the returned value in a local variable that you can inspect:

   Object obj = myRemoteService.getObj();
   MyModel model = (MyModel) obj;

The ClassCastException happens on the second line. This means that the class of obj is not compatible with MyModel. To dig into the issue, put a breakpoint on that line. When the debugger stops there, start looking into obj and the MyModel class.

I have found that the best way is to create expressions in the debugger that show the class loaders associated with each class. To do this, bring up the “Expressions” view in Eclipse (you might need to add it from Window – Show View). Click “Add new expression” and add two expressions like this:

  • obj.getClass().getClassLoader();
  • com.mycompany.MyModel.class.getClassLoader();

Now the Expressions view constantly shows which class loaders were used to load the classes. In the value field you can see whether they are the same (if they are, they have the same id number). Most likely they are not, since otherwise you wouldn’t be getting the exception. To understand the problem, start looking into the properties of the class loaders. If you are working inside a servlet container such as Tomcat, the class loaders have a property named “contextName” that tells which web application loaded the class. This can give hints about the underlying problem. You can also look at jarPath, jarNames, canonicalLoaderDir and so on.
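
If you prefer to see the same information without the debugger, a couple of temporary log lines work just as well (the names below are the same placeholders used in the earlier snippet):

   Object obj = myRemoteService.getObj();
   // Print which class loader produced each side of the cast; if they differ,
   // the cast below will fail even though the class names are identical.
   System.out.println("obj loaded by:     " + obj.getClass().getClassLoader());
   System.out.println("MyModel loaded by: " + MyModel.class.getClassLoader());
   MyModel model = (MyModel) obj;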

All this does not magically solve the problem, but once you understand the actual cause of the ClassCastException, it is easier to fix.

Posted in Eclipse |

Using git to deploy new versions

Simple instructions for setting up git so that you can push new versions to a server from the comfort of your workstation. This is based on an article by Abhijit Menon-Sen; if you need more details, take a look at that.

This will create the home directory for the project underneath /var. The git repository is put in the same place, in a .git subfolder. If you are using this to push a website, you need to either move the .git folder somewhere else or use .htaccess to make sure the .git folder is not exposed to the world.

On the server (replace the project name on the first line):

export PROJECT=<NAME OF THE PROJECT>
mkdir /var/$PROJECT
mkdir /var/$PROJECT/.git
git init --bare /var/$PROJECT/.git
cat <<EOF > /var/$PROJECT/.git/hooks/post-receive
#!/bin/sh
GIT_WORK_TREE=/var/$PROJECT git checkout -f
EOF

chmod +x /var/$PROJECT/.git/hooks/post-receive

On the client, where you have the git repository, run the following to configure the remote:

git remote add live ssh://<username>@<myserver>/var/<project>

Then use this to push the master branch to the server:

git push live master
Posted in Web development |