
Moving Your UI Code Into The Future with Qt and C++ - An In-Depth Look At Code Migrations of Graphical User Interfaces (GUIs)

In this blog post, I will look at ways to move outdated UI code into the 21st century, of course with a Qt focus, and based on KDAB’s more than 15 years of experience in migrations. You will learn about your options, when it makes sense to consider a migration as opposed to a complete re-write, and how you can go about getting your migration project kicked off.

At KDAB, we have about 30 years of code migration experience, and we have been involved in migration projects of all sizes, from 50,000 lines of code to over 8 million lines (for a total of over 20 million), for toolkits such as MFC, Motif, Photon, Tcl/Tk, Qt 3, Qt 4, etc. So we like to think we have collected a bit of experience on how to do this, and we are sharing some of that experience with you in this article.

Further, we have migrated from several different toolkits, including Motif, MFC, and Photon, and have always observed that there are more similarities than differences between GUI migrations.


Software has been written for many decades by now. But what few people realize is that even software with graphical UIs has been around for 25 years (or, in some cases, even longer). And that clearly means that there is a lot of software out there whose graphical user interfaces have not kept up with the times.

That software could have been written for windowing systems that have disappeared (e.g. OpenLook, early versions of Windows, Mac OS 9), using GUI libraries that no longer exist or are no longer supported (such as MFC and Photon), or using GUI libraries such as Motif that still exist but for which talent is hard to find these days, because they left the mainstream a long time ago.

Or, quite simply, it could be software whose graphical user interface is insufficient for today's demanding users, who are used to a world of fluid animations, UIs that change according to context, responsive interfaces, and much more.

Assessing the problem

If you find yourself in one of the situations described above, you need to take stock of all parameters that apply to your project. Does it still sell? Are you under any obligation to maintain (and evolve) it? Does it contain a lot of specially developed assets that would be costly to lose, for example complex algorithms? How long can you afford to be without a new release? Without a new feature release? After answering these questions, you might find yourself with one or more of the following four options:

Option 1: Abandon the software altogether

It is not economical to maintain the old codebase any more, and you are better off spending your resources on something else. Sometimes, it is important to realize that you are better off by cutting the ties to the old software.

Option 2: String the old software along

Make sure the few remaining employees who still know how it works do not leave, avoid new features where possible, fix bugs, and postpone the inevitable until later. Depending on your competitive situation, this may be an alternative that keeps cost at a minimum. The downside is that you keep accumulating technical debt, and that you become more vulnerable to employees leaving or to changes in the technology landscape (your old library may not run on more modern versions of operating systems any more, for example).

Option 3: Throw the old software away altogether

Re-write everything from scratch, using modern development environments, modern libraries, modern coding techniques (chances are nobody had heard of templates or threads at the time the software was first written), and modern UI paradigms.

Your team is going to love this. Most software developers prefer to cut the ties to the old and start a new greenfield development where they can apply all the modern techniques and skills and are not bound by technical decisions made decades ago, and that seem entirely wrong from today’s perspective. So there is a clear upside for this choice.

Unfortunately there is a clear downside, too. Of all options, this option tends to be the one that leaves you without a working and sellable product for the longest time, unless you also string the old software along at the same time, in which case your cost is going to at least double.

In addition, you will often find yourself with a highly motivated A-team working on the new software, and a B-team working on the old one that tends to be unmotivated and is eventually bound to leave. Again, whether this is a problem for you depends on your competitive situation. If your customers cannot or would not consider other vendors, you might just get away with letting them wait for two years until the next release.

There is also another risk here. Your software is likely to contain modules that embody your domain knowledge, that were hard to write and hard to get right. Examples typically include complex algorithms, data storage, import and/or export of somebody else's data formats, network protocols, and so on. And do remember that writing a piece of code is the smallest part of the work involved in software development. Debugging it and making sure that it covers all corner cases is often many times more expensive than the initial writing. If you throw away all your code, you are bound to make the same mistakes that your predecessors made when they wrote the code 20 years ago.

Option 4: Perform a careful surgery by keeping specially developed assets, and focus on replacing the outdated codebase

In this scenario, you would identify the bits that you would like to keep, often the ones just described at the end of the previous scenario, and focus on replacing the outdated bits, typically the user interface.

This strategy often works surprisingly well, because the bits that have stood the test of time and are still valid and useful (such as complex algorithms) will often continue to hold up in future versions, with little or no change. So if you can manage to separate out these modules (if your software was designed well, you will; even if it was not, it can be done), you can end up with less work, a shorter time-to-market, and, all-importantly, lower cost.

The downside of this approach is that you tend to need a higher-than-average skillset in your team, since you need to understand both the old and the new. It might also be somewhat less appealing to your team to first have to perform the surgery that splits your software into the good bits and the bad bits. After all, the key skills of your team tend to be in your domain, be that oil and gas exploration, medical imaging, or something completely different, and not in rewriting ancient code written against one library to use another toolkit.

To sum up: You will need to assess your situation, in particular your competitive situation, but also your team’s skillset and existing workload before you can make an informed choice.

Looking back at the history of our customer projects, option 4 (Perform a careful surgery by keeping specially developed assets, and focus on replacing the outdated codebase) has often proven to be the most favorable one, and it is this one that we consider a migration. But all options have been used by our customers at one point or another.

How to go about a code migration of a graphical user interface (GUI)

So by now, we have established that a code migration involves gradually bringing your software, written in older environments such as MFC, Motif, Tcl/Tk, or many others, over to Qt. In fact, the previous environment does not even need to be “old”. We have customers who had good reason to migrate from, e.g., Java to Qt and C++ (often for performance or maintenance reasons). At KDAB, we have identified 8 steps or key areas to get a good start on your migration; read on to learn about them.

Step 1: Identify the code that needs to be migrated

We have previously established that there will likely be code that can or should be preserved, and other code that will need to be replaced. How much of either of the two of course depends entirely on the codebase in question.

How do you know which parts of the code can potentially be preserved? Well, your knowledge of the code certainly comes into play here: you will likely have an idea of which parts depend on the old libraries or frameworks that you need to get rid of, and which parts are just plain algorithmic, typically using nothing but standard C or C++.

But of course, it is always a good idea to let a computer verify your assumptions. At KDAB, we do that by trying to build (compile and link) the code on what we call a hostile platform. A hostile platform is a platform that does not support the framework or library that you want to get rid of. For example, if your codebase originally uses MFC, we will try to build it on a Unix or macOS system. If it originally uses Motif, we will try to build it on Windows. How far away the hostile platform needs to be depends on your intentions. Are you trying to get your software ready for the next decade, but do not need to support additional platforms? If so, just uninstalling the framework (or hiding it from the build) might be enough. If not, it is better to use something technically further away.

Of course, that build will fail; that is not the question. The question is: how badly, and in how many places? With any luck, there might be a few code files that are so platform-agnostic that they just keep compiling, though there are likely not going to be a whole lot of them. Others will have just a few simple problems, like different include files, standard library functions that have moved into a different namespace in the meantime, or the odd string handling function that has different names on Unix and Windows. And then there will be the code files that simply crash and burn loudly.

Carefully examine the compile output and rate each file and module. Do not make any changes just yet, even if they seem trivial, such as changing the name of an include file, otherwise you risk getting confused about your changes. (Of course, if you are unsure about whether a quick fix would be sufficient, you can test small changes, and then revert them.)

You can also use text searching tools to identify modules and files using the old framework by searching for characteristic substrings. For example, if you are migrating away from MFC, search for class names starting with a capital ‘C’, followed by another capital letter. If you are migrating away from Motif, search for strings starting with ‘Xm’, or ‘Xt’, or ‘X’ followed by another capital letter. But there are several risks with this approach: You may have many false positives (there could be other class names than MFC ones that start with a capital ‘C’, for example). Or you may have subclassed the classes in your old toolkit, and then only used your subclasses, so the naming convention of your old toolkit may be confined to a small amount of the code, but still there is a strong dependency on the toolkit scattered about elsewhere. In the end, it’s your compiler that decides whether your code compiles, not your text searching tool.

At the end of this exercise, you will have a much better idea of which parts in your code will need a lot of intervention, and which parts can remain largely unchanged. That should also give you an indication of the effort required to actually complete the migration.

Step 2: Get your application running, as quickly as possible, as often as possible

At this point, you may feel ready to start the migration. You have a list of modules and files to work with, so you distribute work packages to your team, and they all work away happily on their assigned tasks. In three months’ time, you will all come back, integrate your work, and have a shiny new version of your application.

Well, no. Or rather, you would have to be lucky for that to work. Many years of software industry experience have shown that so-called big-bang integrations are risky and likely to fail altogether. The agile paradigm of continuously making small improvements applies to migrations as well. You want to be able to integrate and test small changes as soon as possible. But how can you possibly do that if 90% of your application does not even build? There is only one possible answer: make it build! Do what it takes to make your application build. Stub out the bits that do not build, temporarily remove files or entire modules from your makefiles if they are too large to just stub out, use the preprocessor to remove blocks of lines from the build; just make it build. But do this in a controlled, reproducible, and reversible way. You want to be able to tell afterwards exactly which bits you have stubbed out or temporarily removed, because they will all need to go back in again after they have been migrated.

Actually, maybe not all of them – chances are you have a fair amount of dead code in your codebase. A migration is a good opportunity to uncover, and potentially remove, that dead code. For example, if you are using preprocessor defines to stub things out, don’t call them ‘TODO’; call them ‘I_AM_STUBBING_THIS_OUT_BUT_IT_NEEDS_TO_GO_BACK_IN_LATER’. Why? Well, chances are that your code is already littered with TODOs, while you will not find any of the latter in your code. It is OK if that makes your code look ugly; once you are done, these will all have disappeared again anyway.

Depending on the architecture and code design of your application, once you have eventually gotten things to build cleanly, you may have nothing left but a ‘main()’ function, and possibly even that one will contain stubbed-out code. That is fine at this point; you are going to bring things back in quickly.

Step 3: Start migrating the central part of the application

So at this point, you may be tempted once more to think that now is a good time to distribute all the work across the team, asking everybody to integrate what they have finished as soon as possible. Still, hold on and think again. Let us assume as an example that you are migrating a CAD application that allows you to create technical drawings. It would not be very helpful at this point to work on the ‘Preferences’ dialog that lets the user configure how the drawing is displayed, because you would not even be able to create a drawing (or, preferably, load a well-defined demo file) on which to test the ‘Preferences’ dialog. Instead, identify a few key features of your application that are fundamental to everything else (in a file- and document-based application like our CAD example, that could be opening, saving, and to some degree modifying the document). Identify the modules that you absolutely need for these tasks, and work towards completing those first. Once you are there, chances are that you have far fewer dependencies for the rest and can farm out work much better in parallel.

Step 4: Track the status of your migration

So you are happily working away on your migration, completing a few modules every week, and integrating and testing them as you go. And one beautiful morning, your boss comes in and asks you whether you are on schedule, and whether the marketing department can start beating the drum about the shiny new version that now also runs on macOS. You have a good feeling about your project, and say so to your boss, but your boss insists that you quantify that feeling, because she has been around for way too long and knows all about over-optimistic software engineers. Now what do you do?

We have been in this situation ourselves, and because providing detailed, realistic numbers instills confidence in our migration customers, we have developed tools and algorithms for status tracking. These tools can automatically scan the codebase and generate status report spreadsheets, including nice burndown charts. How to actually execute the tracking depends a lot on the codebase in question, though. But whichever approach you finally use, we strongly advise you to come up with a way to track your progress. The more you automate generating these progress reports, the more likely it is that you will actually create them regularly. A word of warning, though: it is easy to forget about the long tail. Your development speed is going to slow down towards the end of the project, as less parallelization is possible and only bug-fixing tasks remain.

Step 5: A test plan is a good thing to have, you really should have one

Whether you perform a migration yourself or you choose to outsource it, a test plan is an asset of tremendous value. It does not even need to be automated, but could simply be a document with steps and expected results. In fact, as automated tests are extremely likely to break during the migration anyway, you might even need an entirely different GUI testing tool.

Your test plan will allow you to check your migration for feature completeness (assuming it is complete, of course!), give you yet another way of tracking progress, and also tell you when you are done. Can you perform all tests in your test plan with the expected outcome? And are you confident that your test plan is complete? Well then, congratulations, you have completed your migration! Now have a look at code that is still stubbed out. Its presence indicates either that your test plan is not as complete as you thought it was (in which case you are not done after all), or that that particular code was not in use any more. Double-check, ask for a review, and then go ahead and retire it.

Even if you do not have a test plan, you might have something that you can turn into one. For example, you may have a user’s manual. Can you perform everything the user’s manual describes with your new version? And while you are checking that, write down the steps you take to verify it, and voilà, there is your new test plan. (Now, as a next step, turn this into an automated test suite.)

Step 6: Automate the engineer, not the engineering!

Over here at KDAB, we often get asked whether we could not make and sell a tool that does automated migrations. And of course, we have had that idea as well, many times over in fact. Alas, that idea is doomed to fail. No matter how sophisticated your tool, it will never be good enough. Automating 60% of your migration would already be a tremendous achievement, but the tool, no matter how good it is, is going to get some things wrong, so you are going to spend a lot of time fixing those bugs. And then you still have the other 40% left. In the end, you will find that the time to write the tool, plus the time to clean up the mess the tool left behind, plus the work on the bits that the tool could not migrate, is going to be more effort than just performing the migration by hand, by well-trained engineers with an actual brain.

But of course, there is room for software tools in a migration. For example, while we strongly believe that the actual code should be migrated by real engineers line by line, for the reasons described above, these real engineers should get all the tooling help they can. For that reason, we at KDAB have developed countless internal migration tools over the years that allow our engineers to be as productive as possible. For example, in a migration from Motif to Qt, code like the following would likely occur rather often:

XmToggleButtonGadgetSetState(_onoff, sc && sc->isEnabled(), false);

and this would get migrated to

ui->_onoff->setChecked(sc && sc->isEnabled());

Of course, it is not necessary (and not even desirable, since it is error-prone) for the engineer to type in all the input parameters one more time. This is where a well-configured editor can be of tremendous help. But it is still the engineer who identifies the situation, triggers the code transformation, exactly once, in this particular location, and then reviews the outcome.

Another example of tooling help is switching between versions: the old (unmigrated) one and the new one (under migration). We usually keep the two code trees next to each other, and with our tooling, the engineer can do things like quickly jump between the two versions of the same file in order to check “what did this look like in the old version again?”, open another buffer or another window with the other version, and so on. Quite obviously, heavy grepping (the use of text-based search tools with complex regular expressions) is also an indispensable part of a migration engineer’s toolbox.

Step 7: Avoid feature creep

A question we often get from customers that we perform migrations for is, “So, while you have your dirty hands deep in our code anyway, can’t you also add feature X/refactor our code/make this look like an iPhone?” And of course we could!

But we would rather not. A migration, replacing one UI framework with another, and/or bringing an application from one to multiple operating systems, is a major undertaking. It is like a heart transplant. No surgeon would fix a broken bone during a heart transplant. It is simply too risky; you might end up unable to reassemble the parts. Yes, do all of the above, but do them one at a time, with complete and thorough tests in between.

The overall elapsed time will be shorter, and the total risk will be a lot lower. I know that a migration with no new features, not even a refactoring, can be a hard sell to management; after all, you are spending a lot of money, but there are no visible improvements. Still, you should fight that fight to safeguard the future. Explain that you have accumulated technical debt that needs to be paid off now, or you will have to pay off more of it later because of the accumulated interest.

This is because the more you allow your technical debt to accumulate, the harder it becomes to make changes of any kind. Eventually it will even be impossible to add new features without breaking something else, since, from experience, the cost of technical debt grows exponentially with the amount accumulated.

Maintenance will also take a hit, since it will become close to impossible, or at least economically infeasible, to just fix defects in the software. This also means that future versions, be they feature releases or just bug-fix releases, will take longer to implement, creating delays, extra stress, and reduced customer satisfaction.

Managers tend to understand that kind of language. And, after all, if you are going to Qt from a single-platform toolkit such as MFC (just Windows) or Motif (just Unix/Linux), there is one big new feature you are creating after all – the availability on multiple platforms.

Another question we often get is whether the customer should clean up or refactor their codebase before handing it over to us for the migration. And surely, the cleaner the codebase, the faster (and thus more cost-efficient) the migration is going to be. But if you do this, do it properly: run all the tests as if you were to release the cleaned-up or refactored version. If you are not able to do that for lack of time or resources, we would rather work with the uncleaned, but released, version, as that has a defined standard that we can work from. And we can always refactor after the migration. Again, it is a question of mitigating risk by serializing tasks and doing what you are best at.

Step 8: Decide what is important and what is not

When you start your migration, discuss among yourselves why you are doing it. This might seem obvious, but I have often heard customers complain that a slider did not work the way it used to in the old toolkit, and request that the slider be made to work in exactly the same way. Such a complaint is moot if the reason for doing the migration in the first place is to make the application feel more native on the target platform.

Quantify your options

Remember the four options for moving ahead that we discussed earlier? If you have not done so already, now would be a really good time to start quantifying them.

Granted, estimating the cost of a migration is far from trivial, but you are still in a much better position than if you had to come up with an estimate for writing the application from scratch. After all, you have a great specification at hand, namely: “it should work like the old one”. To build the estimate, you need to figure out the firepower your developers possess, in other words, how long it takes them to complete a feature (including writing it, debugging it, documenting it, and possibly even writing unit tests for it). Your source code repository may help you with answers here.

For estimating what the migration itself would cost you, ask key employees to migrate smaller parts of your application. While doing so, measure their speed, and take ample notes of what is required.

Can anybody do a code migration?

In general, yes. As long as you have sufficient skills in the environment you are migrating to, you should be able to perform a migration. Another interesting question, however, is how efficient you are going to be. In order to find out more about this, we have analyzed many migration projects that we have completed, and looked at the skills available, particularly in those projects where the migration work was shared between different parties, i.e. the customer, KDAB, and one or more other service vendors. What we found was that there are three large groups of skills that mattered, in increasing order of importance:

  • Knowing about the source framework or toolkit. So if you are migrating from MFC to Qt, knowing about MFC. Of course, if you can immediately say what a bit of code is doing without having to look it up in a reference manual every time, you are going to be faster. But it also turned out that this was what mattered least. No matter how well designed, code in large-scale applications tends to be repetitive. After you have looked up what a certain method invocation does in a toolkit you don’t know, you will remember, and you will also remember what construct you migrated it to. And if you are doing well, you have added automation support to your editor configuration (see above).
  • Knowing about the target framework or toolkit; Qt in our case. Understandably, this is a skill you cannot do without: without it, you will not be able to produce good code in the target framework, and you might not even be able to recreate all of the functionality. Better skills mean better productivity, so the more Qt you know, the faster you are going to migrate. And yet, this was not the most important skill, as we found out.
  • Knowing about migration strategies, tools, and techniques. This turned out to be the key factor in achieving great productivity when performing a migration. In fact, it turned out to be a lot more important than knowing the target framework. Also, the productivity difference between somebody who had a lot of migration experience and somebody who did not was greater than the productivity difference between somebody who knew Qt well and somebody who did not.

So you need to know how to approach a migration, develop many useful small-scale automation tools in your editor, and pick the right order of work. We hope we have given you some tools and strategies for going about a migration in this article. As far as the details go, those can really only be learned by experience. The more migrations you do, the faster you are going to get.

Getting help

We hope that you have a much better idea about how to plan, prepare, and execute a migration to Qt now, and wish you good luck with your project!

If you have experiences to share, we’d love to hear about them right here in the comments, or by email. And if you want the KDAB Experts to help you with your migration, do not hesitate to contact us!

If you are interested in seeing some of our code migration tools in action, you may wish to check out our porting video, in which KDAB’s Jesper Pedersen presents our tools for migrating from Photon to Qt. Approximately 90% of what is presented there applies even if your source toolkit is MFC, Motif, or something else. More in-depth information about the nature of migrations starts at 16:35, and The KDAB Approach to migrations at 34:20.

You can also read more about our software development and training services for Qt, C++, and OpenGL.
