Posts in Category: Software

Innovating in a corporate world

Luck had it that I got hold of a free registration for the SDForum’s Open Innovation and Research Fair in Santa Clara, CA on September 18, 2009.

Being so close to home, I managed to shuffle my work schedule around and attend the morning part of the event to listen to the keynote speakers talking about how innovation happens in their corporations.

Although the conference was not very technical and I clearly was not the typical attendee (“senior” and “manager” are not part of my job description :-)), I find these events rather interesting and a good opportunity to step back from the computer screen and the news reader and look at things from another perspective.

The keynotes were delivered by representatives from Nokia, HP, IBM, PayPal, Forbes and EMC, and it was clear that every company has a different perspective on, and approach to, innovation. The ones I got the most value from were:

  • Nokia: John Shen leads the Systems Research Center at Nokia Research Center in Palo Alto, CA. John’s talk covered Nokia’s approach to innovation from a very academic perspective. Innovation is only possible when there is a culture, people and vision oriented towards innovation. To innovate, you need to hire the best! But what defines the best? For John, it is someone (with at least one PhD :-)) who can grasp not only the science behind the research topic, but also quickly create a marketable solution for the innovations created. At the end of the day, the decision makers need to run the business and please the stockholders, so it is important that a research team which just came up with the most exciting technological innovation can also create the most exciting business plan to convince the decision makers. “Innovation is the intersection of business and technology.”
  • IBM: Deborah Magid, from the IBM Venture Capital Group. Given that IBM has several research centers across the world, including one right here in San Jose, I was expecting someone from research to come and explain what makes research at IBM special and responsible for the company’s extensive list of technological innovation achievements in the last century (yes, getting close to the 100th anniversary of incorporation). Instead, I got to learn how a completely different part of IBM, the Venture Capital Group, drives innovation mainly through partnerships in the field. It basically consists of creating innovative solutions out of existing technological components/solutions provided by partners. The most recent example of such a partnership is the focus on Smarter Planet. Through diverse partnerships, from utility companies to semiconductor producers to software providers, IBM is creating solutions that will provide smarter management of cities, food, healthcare, water, traffic, etc.
  • HP: Rich Friedrich, the Director of the Strategy and Innovation Office at HP, delivered a keynote on how his company is applying the concept of Open Innovation to amplify and accelerate research results. Through research collaboration programs with universities, HP has been able to invest in university research and then successfully monetize the innovations created. Definitely a good example of collaboration between industry and academia. A couple of interesting notes from the session: OpenCirrus, an open cloud computing research testbed; and the fact that when you manage innovation inside a large company, your new projects need to generate billions to have any significance in the overall company revenue and be considered successful.

The message from these keynotes is that there is no secret or standard recipe for creating innovative technologies. It can range from a very science-focused internal research strategy, to a shared activity between a company’s research departments and academia, and it can even happen outside of research, when you join technologies together to create something better than the sum of the parts.

And an honorable mention to the University of Minho, Braga, Portugal, which has been participating in the HP Open Innovation program doing research projects on cloud computing.

EclipseDay 2009 at the Googleplex

Last Thursday, August 27th, Google kindly hosted the second edition of EclipseDay at the Googleplex. The event was a full day of technical sessions on Eclipse. Having been an Eclipse tools developer for the last year, this event was an excellent opportunity to improve my Eclipse skills, network with other Eclipse developers and attend a series of excellent talks.

The agenda divided the sessions into two tracks, and I selected the following sessions to attend:

  • Eclipse in the Enterprise: Lessons from Google, by Terry Parker and Robert Konigsberg from Google: the day started with this joint keynote session from Terry and Robert, where they described how Eclipse at Google evolved from the days when it was used by just a small group of developers and any integration with Google’s build system was done through external scripts and some manual hacks, to today’s enterprise deployment of a customized Eclipse environment with complete integration with the build system, in addition to usability/functionality enhancements that automate some developer tasks.
  • OSGi for Eclipse Developers, by Chris Aniszczyk from EclipseSource: I enjoyed all the sessions, but this was the best session of the day for me! After reading a few tutorials about OSGi and getting the feeling that I was missing something about it, Chris’s presentation finally nailed it for me! The fact that he started by explaining OSGi without mentioning Eclipse helped a lot, as previous tutorials I had read mixed OSGi and Equinox together and I ended up not understanding what was pure OSGi and what was Eclipse. The session covered several topics, such as the OSGi service architecture, bundles and their life cycle, and even how you can run BUG modules on OSGi :-) Overall a great session, and it was good to talk to him later during the break and discuss OSGi a bit more (see the small bundle sketch after this list).
  • Developing for Android with Eclipse, by Xavier Ducrohet from Google: Xavier is the lead for the Android SDK, and in this session he covered several aspects of the SDK, from a quick start to issues they have found while developing it, through the most commonly used features, and he pulled back the curtain on some new features to come. Xavier also covered some of the issues/limitations of using the emulator instead of a real device.
  • Deploying Successful Enterprise Tools, by Joep Rottinghuis from eBay: based on his experience leading an Eclipse tools team at eBay, Joep described the process of building, deploying and supporting tools at the enterprise level. The session was not specifically focused on Eclipse, but it nonetheless offered very valuable insights on the challenges one encounters when deploying tools in the enterprise, from early adoption to user feedback, maintenance, documentation and support, besides the effort necessary to come up with new and improved functionality.
  • Build and Provision: Two Sides of the Coin We Love to Hate, by Ed Merks, EMF lead: after having read several chapters of EMF: Eclipse Modeling Framework (2nd Edition) just a few months ago, I really wanted to attend Ed’s talk. After all, I have been using a fairly decent dose of EMF lately and it is always great to hear the leaders in the technologies we use. However, as the title suggests, the session was not about EMF but about Eclipse’s build and provisioning system. Ed’s presentation focused on how the build is an essential tool for any project and, at the same time, not generally liked by most of the participants in the project, especially due to the constraints caused when the build breaks. He then went on to explain the Eclipse build process, together with the efforts that have been made on automation and provisioning.
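
To make the OSGi point from Chris’s session a bit more concrete: the unit of deployment is a bundle, and the framework drives its life cycle through an activator. Below is a minimal sketch of my own (the class and messages are invented for illustration, not taken from the talk); the bundle’s MANIFEST.MF would point at it with a Bundle-Activator header:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Minimal OSGi bundle activator: the framework calls start()/stop()
// as the bundle moves through its life cycle. No Eclipse involved.
public class HelloActivator implements BundleActivator {

	public void start(BundleContext context) {
		// A real bundle would typically register or look up services here.
		System.out.println("Hello bundle started");
	}

	public void stop(BundleContext context) {
		System.out.println("Hello bundle stopped");
	}
}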

Overall, it was a great day! Although three sessions focused on build/provisioning/deployment, each one of them tackled a different aspect of the process and contributed valuable insights. Another thing I would like to note is the quality of the speakers and their sessions! No product or marketing pitches, just pure technical joy :-)

As of this writing, the slides are available online at http://wiki.eclipse.org/Eclipse_Day_At_Googleplex_2009#Presentation_Slides_.26_Videos

Thanks to the organizers of EclipseDay 2009, and I hope to join you again in 2010!

Debugging Ant tasks in Eclipse

Today I came across an issue that required me to debug a custom Ant task that we have. While the Eclipse integrated debugger allows you to step through the targets and tasks in the build.xml file using the action Debug As -> Ant Script, it doesn’t actually let you step into the Java class that implements the task. This is a major drawback, as most of the complexity (and issues :-)) tends to be in the task implementation code.

After searching around for a bit, I came across the Eclipse Remote Debugger debug configuration. This configuration allows you to remotely debug applications by establishing a JDWP (Java Debug Wire Protocol, part of JPDA – Java Platform Debugger Architecture) connection between the running application and the debugger. After learning about this, setting up the environment to debug Ant tasks from within Eclipse was pretty straightforward.

The first step is to set up the Ant script launcher to run in debug mode and listen for a debugger connection. The following should be used as the JRE arguments for the Ant configuration (with server=y the Ant JVM listens on port 8000 and, since suspend defaults to y, it waits for the debugger to attach before running the script):

-Xdebug -agentlib:jdwp=transport=dt_socket,server=y,address=8000

You can set up your Ant configuration by going to Run -> External Tools -> External Tools Configurations and creating a new Ant Build configuration. Insert the location of the build file in the Main tab and set up the arguments in the JRE tab:


After the Ant configuration is set up, we need to take care of the remote debugger configuration. Go to Run -> Debug Configurations… and create a new configuration for Remote Java Application. Make sure you set the same port number that you used in the Ant configuration, and you are good to go.

Now, place the breakpoints in your build.xml and Java classes. In order to debug, you need to first launch the Ant script and then attach the debugger to it. Right-click the build.xml file and select Debug As -> Ant Script. Then go to Run -> Debug Configurations…, select the Remote Ant Debugger configuration and click Debug. The debugger will now attach to the running process and let you step through both the XML file and the Java classes:
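
If you want something to test the setup against, a custom task can be as small as the sketch below (a hypothetical example of mine, not the task from this post). Compile it, declare it in build.xml with a <taskdef> pointing at the class, and set a breakpoint inside execute():

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

// Hypothetical custom Ant task, used only to illustrate the debugging setup.
public class HelloTask extends Task {
	private String name;

	// Called by Ant to set the task's "name" attribute from build.xml.
	public void setName(String name) {
		this.name = name;
	}

	@Override
	public void execute() throws BuildException {
		// Place a breakpoint here; the remote debugger will stop on it.
		log("Hello, " + name);
	}
}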

Have fun!

Search for the XML Superstar

IDUG (the International DB2 Users Group) is sponsoring a worldwide contest initiative called The XML Challenge – Search for the XML Superstar. This contest aims to recognize developers (students or professionals) who create XML solutions, in one of the following categories: Video, Gadget, Query, PortableApp and XML Contest.

They are offering thousands of dollars in prizes, including Wiis, Zunes, iPods, conference passes, notebooks, GPS units, etc.

If you live in the US, you can submit your Video and Gadget entries until December 16th and 17th, respectively. The XML programming contest has also started, and submissions will be accepted until January 31st.

For other countries, keep checking the website xmlchallenge.com for updates on your local contest.

Monitor calibration in Linux

If you are into photography, then you should already know that calibrating your monitor is something really important if you plan to print your pictures. I recently printed some photos and noticed that the printed colors were considerably different from the colors on my LCD. After comparing my monitor with a few others, it was obvious that mine, and some of the others too, were not color calibrated.

Monitors can be calibrated to display the “correct” colors by using a calibration device, complemented by the vendor’s software. I decided to buy the Spyder2Express since it had good reviews and a reasonable price. Unfortunately, there is no color calibration device that is supported on Linux by its vendor, and I currently use Linux (openSUSE) as my main operating system. I could use my work laptop running Windows to calibrate the monitor, then export the color profile and import it in Linux. There is an article here on how to do it. With help from one of the comments in that article, I found out about the Argyll Color Management System. Argyll is a monitor calibration software package for Linux that supports most of the existing calibration devices. For the Spyder2, it extracts what it needs from the vendor’s Windows software so that it can communicate with the device, run the calibration tests and create a color profile. It also provides a utility to apply the color profile to your monitor.

The steps to calibrate your monitor in Linux using Argyll are pretty simple. In my case, using the Spyder2Express device, all I had to do was run the following commands (as root, or after giving the user permission to communicate with the USB device):

$ cd Argyll_V1.0.3/
$ ./spyd2en -v /media/ColorVision/setup/setup.exe (extracts the firmware needed to talk to the device)
$ ./dispcal -v -y l -o MyMonitor (runs calibration tests and creates monitor color profile)
$ ./dispwin MyMonitor.icc (applies color profile to monitor)

The generated color profile can also be imported into your post-processing software, like GIMP, so that it uses the monitor color profile instead of a more common profile like sRGB or Adobe RGB.

Batch update of EXIF info

exiftool is a very useful utility that allows you to query or edit the EXIF information in pictures. Going through some of my pictures recently, I found some sets of pictures with incorrect dates in the EXIF info, be it a few hours off or even one year and a few days off. Fixing them one by one was not an option, so after a couple of searches on Google I found exiftool. It is a command line utility that lets you edit EXIF information, and you can do it in batches of files or folders, even using conditions to specify when a change must be applied (see the example at the end of this post).

For me, the need was only to change some photos’ info so that it shows the real time they were taken: three hours earlier than the saved value. The command was as simple as this:

exiftool -DateTimeOriginal-='0:0:0 3:0:0' -CreateDate-='0:0:0 3:0:0' myfolder/

The format for date shifts is ‘YYYY:MM:DD hh:mm:ss’ (years, months, days, hours, minutes, seconds).
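
And an example of a conditional change: exiftool’s -if option takes a Perl-style expression that must be true before a file is touched. The camera model below is just a made-up illustration; this would shift the dates only for photos taken with that body:

exiftool -if '$Model eq "Canon EOS 400D"' -DateTimeOriginal-='0:0:0 3:0:0' -CreateDate-='0:0:0 3:0:0' myfolder/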

Commuting in San Jose and IT podcasts

After three years living in San Jose, I was proven wrong in my assumption that it was impossible to commute in San Jose using public transportation. After careful examination of VTA timetables, and several trial and error attempts, I am now commuting to work, and have found some good things about commuting:

  • usually it takes me 40 minutes door-to-door. Not too bad, considering that the driving time was somewhere between 15-30 minutes, depending on my luck with the 21 (!) traffic lights between home and work (other than that, the drive was a monotonous 9 miles straight, a right turn, 1 mile, arrived);
  • instead of losing 30-60 minutes driving to work every day, I actually found myself with 1h20m of free time to do some productive stuff like reading or listening to podcasts.

Of course there are always drawbacks:

  • the bus runs every 15 minutes, the light rail every 12 and the shuttle bus to IBM about every 30 minutes. If I get it right, I get from one place to the other in 40 minutes. If I get it wrong, i.e., miss the first bus or light rail, it takes at least 30 minutes more (I have to catch the next shuttle, if I get there on time).

But the goal of this post is not really to discuss VTA’s schedule, but podcasts, more specifically, IT podcasts. I have been alternating between reading and listening to podcasts during the trip, and the podcast I have been listening to is Software Engineering Radio. It is an excellent podcast, and I’m still on episode 30, so I have a lot to go (they have 103 as of today :-) ). However, I’m looking for other series to mix in with this one, also because I might skip some of the episodes that are not of interest to me.

So, what IT podcasts do you listen to and what do you like about them? I recently added two series to Amarok, but haven’t listened to any episodes yet:

Don’t forget to leave a comment and let me know what your favorite IT podcasts are! :-)

Mylyn task manager

When I migrated my development environment to Eclipse 3.4 Ganymede, one of the things that caught my attention on Eclipse’s update website was a plugin called Mylyn. A visit to the website, a look over the webcast, and it sounded like something promising.

It definitely is! Mylyn is a task manager that changes your IDE context based on tasks. You create a task, add resources to its context, and when you activate the task, it hides all the other (unneeded) resources from your views (project/package explorer, outlines, editors, etc.). It provides integration with several task repositories, like Bugzilla and Trac. Unfortunately, it doesn’t provide a connector for ClearCase, but I’m still able to use it in an automated way.

I find the tool really awesome when I do something basic like switching tasks: it just closes all the editor windows and projects in the explorer for the task I’m leaving and opens all the files I was working on for the task I’m switching to. This would take me several minutes to do by myself, so having a tool that does that in 1 second is pretty neat!

Here are some more things I like about Mylyn:

  • ability to customize CVS/SVN check-in comments based on templates. Most, if not all, of my commit comments are in the form of “Bug#xxxxx: 1 line description of the bug and fix”. With Mylyn, I can get the comment populated automatically with information from the associated task (see the template sketch after this list).
  • when I (re)activate a task, it positions the cursor in the file and method (if a Java file) I was working on.
  • I can use the URL feature to link to the ClearCase defect page for each one of my defects
  • dynamically adds resources as we follow method references
  • Mylyn filters unrelated content from all views, but I especially like the end result for the package explorer and outline view. When working with classes that have tens of methods, showing just a handful of them in the outline simplifies things a lot!
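
For reference, the commit comment template I use looks roughly like the line below (configured under Mylyn’s team preferences; the variable names are from memory, so treat them as approximate rather than exact):

Bug#${task.id}: ${task.description}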

And the things I don’t like that much:

  • no connector for Rational ClearCase. I have to copy some notes and defect info from ClearCase to my task manually.
  • the option to show filtered content only shows content at the same level. I would like to have a “show all” option for when I need to look for some resource.
  • It slows down the system a bit. Not too bad, but I do notice it when I have several Eclipse instances running.

Overall, I think Mylyn is a great tool and very useful! Even more so if you are working with Bugzilla/Trac projects.

If you want to give it a try, this is Mylyn’s homepage and this is a Mylyn tutorial.


JDBC performance tips

If you are into Java and database development, you will find this article to be a gold mine: http://www.javaperformancetuning.com/tips/jdbc.shtml

It contains links to, and summaries of, tens of other performance articles related to Java database application development.
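
To give a taste of the kind of tip collected there, here is a classic one: batch your inserts instead of paying a network round trip per row. A minimal sketch, with the connection URL, credentials and table all made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
	public static void main(String[] args) throws Exception {
		// Hypothetical connection details, for illustration only.
		Connection con = DriverManager.getConnection(
			"jdbc:db2://localhost:50000/sample", "user", "password");
		con.setAutoCommit(false); // commit once at the end, not per row

		PreparedStatement ps = con.prepareStatement(
			"INSERT INTO employee_log (id, note) VALUES (?, ?)");
		for (int i = 0; i < 1000; i++) {
			ps.setInt(1, i);
			ps.setString(2, "note " + i);
			ps.addBatch(); // queue the row locally
		}
		ps.executeBatch(); // send all queued rows in one shot
		con.commit();
		ps.close();
		con.close();
	}
}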

SQLJ and JDBC

As a follow-up to my last post comparing static SQL with dynamic SQL, I will now post an example of how to run the same code using static and dynamic SQL.

One of my visitors left a comment saying that the scope of static and dynamic SQL in Oracle is different from the one I mentioned. I am not familiar at all with Oracle, but I was able to find some information in their documentation where they compare JDBC and SQLJ. Their concept of static vs. dynamic SQL is different from the concept in DB2, so my examples may not make sense for Oracle users. I also found out that although Oracle had plans to desupport SQLJ in its data server, that support has been reinstated in their 10g release.

The two code samples I will show next are shipped with DB2 (get your free copy of DB2 Express-C) and can be found in the file %DB2FOLDER%/samples/java/sqlj/TbRead.java. I’ll just use one of the several examples in that file, which executes a sub-select statement on the employee table.

Sample code in SQLJ:

#sql cur7 = {SELECT job, edlevel, SUM(comm)
	FROM employee
	WHERE job IN ('DESIGNER', 'FIELDREP')
	GROUP BY ROLLUP(job, edlevel)};
while (true)
{
	#sql {FETCH :cur7 INTO :job, :edlevel, :commSum};
	if (cur7.endFetch())
	{
		break;
	}
	System.out.print("Job: " + job + " Ed Level: " + edlevel + " Tot Comm: " + commSum);
}

Sample code in JDBC:

Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT job, edlevel, SUM(comm) "
	+ " FROM employee "
	+ " WHERE job IN ('DESIGNER', 'FIELDREP') "
	+ " GROUP BY ROLLUP(job, edlevel)");
while (rs.next())
{
	if (rs.getString(1) != null)
	{
		job = rs.getString(1);     // column 1: job
		edlevel = rs.getString(2); // column 2: edlevel
		commSum = rs.getString(3); // column 3: SUM(comm)
		System.out.print("Job: " + job + " Ed Level: " + edlevel + " Tot Comm: " + commSum);
	}
}

Although both styles present different syntax, from a developer’s perspective the only major difference is that when using JDBC one needs to explicitly fetch the row values into Java variables one by one. A common comment from Java developers is that SQLJ is not really Java (one needs to use #sql clauses instead of Java method calls), so they prefer to stick with JDBC.

Like I explained in my previous post, the biggest difference between these two styles (static SQL using SQLJ and dynamic SQL using JDBC) is that the SQL statements in the SQLJ files need to be compiled and bound to the database ahead of runtime. The following diagram illustrates this process:

[Diagram: the static SQL preparation process – the SQL in the SQLJ file is precompiled and bound to the database before runtime]

After the deployment process, SQLJ execution is simpler than JDBC. While JDBC statements need to be prepared at execution time, SQLJ statements are already compiled and ready to use. The following two diagrams illustrate these differences:

[Diagrams: JDBC statement execution vs. SQLJ statement execution]
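
In JDBC terms, the run-time prepare step from the diagram above looks like the sketch below (it reuses the query and the con connection from the samples earlier in this post):

// Dynamic SQL: the statement text is compiled (prepared) by the database
// every time the application runs; with SQLJ this work was already done
// at bind time, so execution can start right away.
PreparedStatement pstmt = con.prepareStatement(
	"SELECT job, edlevel, SUM(comm) FROM employee"
	+ " WHERE job IN ('DESIGNER', 'FIELDREP')"
	+ " GROUP BY ROLLUP(job, edlevel)");
ResultSet rs2 = pstmt.executeQuery();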

As you can see, the static SQL execution process is much simpler, but it requires a more complex deployment process. This is an aspect of database development where there is a clash between DBAs and developers: DBAs prefer the much more refined security and execution control provided by SQLJ and static SQL, while developers prefer the easier development process of dynamic SQL in the form of JDBC.

Soon, I will talk here about a new Java data access platform that supports the usage of both static and dynamic SQL at runtime (through a JVM property), allowing DBAs and developers to use dynamic SQL in development and test environments and static SQL in the production environment. This way, the development community gets the best of both worlds: ease of deployment during the development and testing phases, and greater performance and control in the production environment.

If you are looking for a data management and application development tool, you should take a look at the new IBM Data Studio. It is an Eclipse-based development environment, free to download and use, with support for all major RDBMSs. Download IBM Data Studio.