
Generative AI for Risk-Free Application Modernization

Key Points

  • Developers face intense pressure to deliver faster with fewer resources, and a single mistake can cause system-wide failures, prompting interest in generative AI to modernize code safely.
  • IBM’s “AI in Action” series will examine what generative AI can realistically achieve, how to build it responsibly, and which business problems it can solve.
  • Guests Miha Kralj and David Levy explain that generative AI reshapes the application development lifecycle by enabling developers to work with prompts and instructions rather than writing every line of code manually.
  • The shift is likened to modern car factories where robots handle production and humans design the robots, meaning today's software engineers become "prompt engineers" who guide AI-generated code.
  • While generative AI can produce new code quickly, human developers must still handle essential tasks such as documentation, testing, and validation to ensure quality and mitigate risk.


**Source:** [https://www.youtube.com/watch?v=b0brvO-rrMI](https://www.youtube.com/watch?v=b0brvO-rrMI)

**Duration:** 00:21:34

## Sections

- [00:00:00](https://www.youtube.com/watch?v=b0brvO-rrMI&t=0s) **Generative AI for Safe Modernization** - The hosts discuss how IBM's generative AI tools can accelerate application modernization while minimizing the risk of breaking existing systems.
- [00:03:07](https://www.youtube.com/watch?v=b0brvO-rrMI&t=187s) **Generative AI: Code Assistance, Not Replacement** - The speaker describes how developers leverage generative AI to handle tedious tasks such as adding comments, renaming variables, and refactoring code, while emphasizing that creative problem-solving still relies on human expertise.
- [00:06:14](https://www.youtube.com/watch?v=b0brvO-rrMI&t=374s) **Integrating AI Plugins into VS Code** - Discussion of how developers embed various plugins, including AI tools, into Visual Studio Code; the blurred line between separate and integrated tools; and concerns about potential pitfalls when using AI for code optimization.
- [00:09:24](https://www.youtube.com/watch?v=b0brvO-rrMI&t=564s) **LLM-Generated Code Complexity Problem** - The speaker cautions that while large language models can produce technically correct code, they often output overly clever, low-level constructs (like ternary operators and lambdas) that are hard for humans to read and maintain, stressing the importance of clear, collaborative code.
- [00:12:30](https://www.youtube.com/watch?v=b0brvO-rrMI&t=750s) **Governance Challenges in Generative Code Repeatability** - The speaker explains that generative AI creates different code versions from the same prompt, preventing exact reproducibility and raising governance issues that require new processes to ensure repeatable, auditable development.
- [00:15:43](https://www.youtube.com/watch?v=b0brvO-rrMI&t=943s) **Learning Code with Generative AI** - The speaker emphasizes that true programming mastery requires understanding AI-generated code, recommending a step-wise workflow (explain, fix, then test) to deepen comprehension while leveraging generative tools.
- [00:18:52](https://www.youtube.com/watch?v=b0brvO-rrMI&t=1132s) **Balancing AI Code Tools & Risk** - The speakers discuss the difficulty of selecting AI models for code work, highlighting their strengths in verification versus generation, and the legal, privacy, and training-data concerns that make organizations cautious about deploying LLMs for mission-critical software.

## Full Transcript
Everyone is under pressure to produce faster and to do more with less, which often means that programmers are behind the scenes reprogramming millions of lines of code. And no biggie, but if they get it wrong, the whole house of cards might just collapse. So how can you use generative AI to modernize your applications and make sure you don't break anything in the process? Let's see today on AI in Action.

In this series, we're going to explore what generative AI can and can't do, how it actually gets built, responsible ways to put it into practice, and the real business problems and solutions we'll encounter along the way. So welcome to AI in Action, brought to you by IBM. I'm Albert Lawrence. Today we're going to talk about how generative AI is transforming application development, and making sure that what you spent years building doesn't break when you try to upgrade it. Today I'm joined by Miha Kralj and David Levy. Miha is a senior partner of Cloud Build and Modernization at IBM and a self-described software engineering nerd. Welcome, Miha.

Hey, Albert. How are you today?

I'm so good. I'm excited to get into this. And our other guest is David Levy, an advisory technical engineer for IBM Client Engineering. He builds things, y'all. So why did I choose you two for today's episode? Well, because you both geek out about the build. And today I want to explore how to modernize your applications faster, at lower cost, and with less risk using generative AI, plus the different ways developers can use it to make life easier. So we're going to get into all of that. Let's start off with our first question, though. I want to direct this one to you, Miha. How does generative AI change the application development cycle? And for the folks who haven't dipped their toe in yet, what are they missing?
Well, I typically compare how the job of a software developer is changing to the way the job of car manufacturing changed. If you remember, in the '50s, car plants were typical conveyor belts, and blue-collar workers were welding on them and painting and putting the glass in, and at the end you'd have a human do the inspection of the car. If you look at any modern car plant today, it's all robots, right? Robots are welding, robots are painting, robots are putting the glass in, and robots are doing the final inspection before the car is sold off. Now the question is, where are the humans now? Well, humans are making the robots that make the cars, and that's very similar to what's happening to the software development profession. Instead of directly writing code, we are going to write instructions, prompts, that make code in the future.

The second part is that you can actually use generative AI in the software development lifecycle either for good things or for silly things. Let me explain. Generative AI can generate new code, but if we let generative AI do that, somebody else probably needs to write the documentation, write the tests, do all of those things. And I saw, in the early stages, that in a worst-case scenario developers just became butlers for generative AI. They were literally copy-pasting in one direction: generative AI takes that as a prompt, spits out some code, you copy-paste the code back, you try to compile, it doesn't work. That's not how we do it these days at all. In general, of all the tasks that developers have to do in a traditional lifecycle because of engineering excellence, coding standards, and discipline, we typically prefer to offload the toil type of tasks, let's call them.
Definitely. For example, a developer these days is just going to write the whole class or function or routine, or whatever the language calls it, typically without a massive amount of comments in there, and then ask generative AI to comment the code, to actually explain in human language what the flow of the code is. Very frequently we also see that a lazy developer is going to use single-letter variable names, you know, not just for iterators but in general. Inside your class you're just going to say, I don't know, C for a customer. And then at the end you say to generative AI, please rename all of that stuff, and it's just going to go and put intelligent names on the variables, the fields, and so on. These are just a couple of examples where generative AI can be used as a very useful power tool, but it is not replacing the creativity that we still expect from human developers to shine through.

The creativity thing is huge. Like, you're not going to get generative AI to be creative about a solution, but the documentation, that's it for me, because documentation was tedious. And if you want to make something really well documented, and you're sharing your code amongst a team, or you're making an asset for other people to use, and you have a complex pipeline, then when you look at the code you're seeing a function call with a bunch of parameters. With modern IDEs, vim even, you can just hover over it and get a detailed explanation of what that function does, thanks to good documentation. That's what I use it for almost exclusively. Generative AI is unbelievable for that. It's such a boon for developers.

So then, with all of this, and with that tool truly being so useful, now I'm curious about how you actually use generative AI to transform an application.
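The commenting-and-renaming workflow Miha describes is easy to picture with a small before-and-after. Both snippets below are invented for illustration, not output from any specific tool: the first is the terse single-letter style, the second is what an assistant's rename-and-document pass might produce.

```python
# Before: the terse code a hurried developer might hand to an assistant.
def t(c, r):
    return sum(p * q for p, q in c) * (1 + r)

# After: the same routine once the assistant has renamed everything
# and added a docstring explaining the flow (names are hypothetical).
def order_total(line_items, tax_rate):
    """Return the order total: price * quantity summed over all
    line items, with tax applied to the subtotal."""
    subtotal = sum(price * quantity for price, quantity in line_items)
    return subtotal * (1 + tax_rate)
```

The behavior is unchanged; only the readability improves, which is exactly the kind of toil worth offloading.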
So there are languages that are performant but old, like COBOL, right? And to modernize an application that uses COBOL and a mainframe, you could have a plugin in your VS Code where you're able to have two panes, one with the COBOL and one with, let's say, the Java they're converting it to, and you're able to do the whole suite of transformation and testing and LPAR work and all of this stuff. And it's just like Miha was saying, the human is still in the loop. You're not asking it to do it all on its own. You're doing it, and you're testing it, and you're making sure it's working, and you're understanding what the COBOL programming language is doing, and then you're able to convert it to something in Java. And you just have a first step in the transformation. And that is a huge leap forward from just trying to write it on your own.

Well then, is this a separate set of tooling that's required in order to do this, or does it integrate into somebody's current existing environment?

You can integrate it into VS Code. Most programmers are running some kind of IDE, and VS Code is probably the most popular, and they're able to integrate this kind of tooling into VS Code.

The way toolchains work for developers, it's extremely hard to say what's a separate tool and what's an integrated tool. Most developers will, let's say, open Visual Studio Code as their coding environment, and then they're going to load sometimes up to 20 plugins. And every plugin is a separate tool, sort of. You want the plugin that color-codes your code. You want the plugin that is actually linting your code, properly aligning it. You can have a plugin that is looking for your syntactical errors. Or you can have a plugin for AI.

Okay, okay, I'm getting it now. It's all coming together.
But with all these different pieces of the build, I'm kind of curious about some potential pitfalls.

I actually want to give you an interesting example that is not necessarily code conversion. It can start, let's say, with Java or any of the current languages, or JavaScript. Typically, developers are going to create a relatively simple flow of code, the way brains work and the way the logic needs to be captured. And then typically, after it works, you start teasing the AI and asking: can you make that more performant? Can you optimize it? Can you make it even more optimized? And if you do that too much, you actually get very interesting side results.

For example, I saw a couple of cases where a developer writes a fully functional class, an object in object-oriented programming. And when you start to push a large language model to optimize, one of the changes the LLM makes is that it turns the code into functional-programming code. It drops the objects, because functional programming is generally more optimized for speed than object-oriented coding is. And when you keep pushing it further, suddenly it starts to do everything in async calls, and it's just throwing awaits everywhere. So at the end, what I'm trying to explain is that you get code that flies, but a regular human brain can't even debug it properly, because you now need to start tracing. If you have a parallelized path, it is extremely hard for a regular developer to do async programming and debugging, especially in a functional-programming paradigm, and suddenly all of those things come together. And yes, it works, but it's no longer for human consumption. It looks like completely cryptic hieroglyphs, and maybe you need to put it into another LLM and say, rewrite this for humans to understand.
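A small illustration of the drift Miha describes, sketched in Python rather than Java (both versions are invented for this example): a plain, readable class versus the dense functional one-liner an over-prompted model might hand back. They compute the same thing; only one is pleasant to debug.

```python
from functools import reduce

# Readable object-oriented version a developer might write first.
class RunningTotal:
    def __init__(self):
        self.total = 0

    def add(self, value):
        if value > 0:          # ignore non-positive values
            self.total += value
        return self.total

# The kind of "optimized" rewrite an over-prompted LLM might produce:
# same result, collapsed into a reduce over a conditional lambda.
dense_total = lambda values: reduce(
    lambda acc, v: acc + v if v > 0 else acc, values, 0
)
```

Pushed far enough, the second style spreads through a codebase until, as Miha puts it, it is no longer for human consumption.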
It's really interesting. I guess if you prompted it to keep it a class, keep it object-oriented, you could try to make it more performant that way. But I have seen that in action, exactly what you're saying, where it just starts converting an object-oriented program that I'm writing into await chains, this callback insanity, where I had to say, all right, you know what, let me just start over, write it myself, and then ask it to document it for me.

The other example would be overdoing it. It's not that it goes wrong; it just goes weird for regular humans to understand. To give you an analogy: it would be like when you want to optimize some nice writing, some nice narrative in English, and you get complete legalese out that actually means the same thing, but you just can't use it for the spec. A good example is that most LLMs, when you ask them to optimize regular if/then chain statements, turn them into ternary operators. Most languages support ternary operators, but they are really hard to comprehend for a regular human; the syntax is just not trivial, and then you start adding whole anonymous lambda functions into it. So very much what I'm trying to say here is that an LLM can almost start to write assembler-level weird constructs which, yes, will work, but they will make the code unmaintainable and incomprehensible, and that is not good code to actually commit. Code needs to be understandable for generations. It's not just something that one nerd puts together so that, you know, everybody is going to admire that piece, that masterpiece, that monument on GitHub or GitLab or wherever it's published. Code is an extremely human, collaborative thing.
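The ternary rewrite Miha mentions might look like this (a hypothetical Python illustration; the function names are made up):

```python
# A readable if/elif chain a developer might write.
def shipping_tier(weight_kg):
    if weight_kg <= 1:
        return "small"
    elif weight_kg <= 5:
        return "medium"
    else:
        return "large"

# The nested-ternary version an LLM might offer when pushed to
# "optimize": equivalent, but much harder for humans to parse.
shipping_tier_terse = lambda w: (
    "small" if w <= 1 else ("medium" if w <= 5 else "large")
)
```

Both return the same answers; the chain reads top to bottom, while the ternary forces the reader to unwind nested conditions in their head.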
And if you rely on somebody that isn't really smarter but makes code that merely looks smarter, that's actually a very, very bad anti-pattern.

Well then, how do you keep the human in the loop when you are coding with gen AI? Is it possible?

It is, with a very long prompt, right?

Okay. But it's also about avoiding anti-patterns, avoiding relying on generative AI to write your code for you. I have, you know, nephews who are interested in computer science and programming. They're young, and with ChatGPT they're like, oh, I'm just going to have this write this Lua script for me in Roblox or whatever it is. And they're not understanding the principles of programming. You know, they didn't read The Art of Computer Programming; they don't understand the principles of what they're doing. And so they're relying on, like Miha was saying, these really hard-to-understand, complex solutions to problems that work well but aren't understandable. Even learning from gen AI in that way is pretty terrible. So you have to really go into it understanding what you're trying to do, and then use gen AI to augment your workflow, augment the stuff that you're writing, documentation in my case, and stuff like that. That's where it really shines, and that, I think, is its most profound benefit.

It seems like the power of really learning and understanding is key here. It's about more than just being able to make a thing or know a thing or quote a thing. It's about that understanding.

Understandability is definitely the key, yes.

Well, look, here's something else I'm wondering about in terms of keys. When I think about governance, when I think about monitoring, do those things have any sort of place in this conversation?

Yes. And we haven't solved them all yet.
And there's lots of discussion about how to do that. One of the projects we are working on recently is repeatability. In the computer science industry, we always expect that when we do something, it can very much be repeated again and again and again. That's why infrastructure-as-code exists and, you know, why we commit all of those things into source control. But the way we actually code with generative AI, because it's generative, it makes things, and it makes them slightly different each and every time. So when you give it a spec, let's say, I want a class that is going to do blah, it will generate code that might be perfectly fine, and it works. But every developer that I know is going to ask for another generation, and another one, and another one, and then stare at four or five of those different variations that came out. And all of them are slightly different, or sometimes significantly different. And then the human just kind of arbitrarily goes, oh, I like this one most, which is all great, right? Then the code goes into a commit and everything is fine.

Here is the core problem when it comes to governance: how do you repeat that process? Because the next person, even feeding in exactly the same prompt, is going to get another five different variations, and none of them is the one that is currently in our source control. So if somebody wants to walk the same steps again, they can't. We need to find a new way of governance to say, well, when you asked it blah and it generated material based on your prompt, we need to start tracking these things. We need to start tracking the temperature number, or something that makes the process repeatable. Otherwise, at the moment, it just goes in one direction. At the end you get the code, but you cannot repeat the process the same way again.
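One way to picture the tracking Miha is calling for: record the prompt, model, and sampling parameters next to a hash of the code they produced, so the provenance of a commit can at least be audited even if generation cannot be replayed bit-for-bit. This is a minimal sketch of the idea, not an established process; every field and name below is invented for illustration.

```python
import hashlib

def generation_record(prompt, model, temperature, seed, generated_code):
    """Build an audit record tying generated code back to the exact
    generation parameters used to produce it."""
    return {
        "prompt": prompt,
        "model": model,
        "temperature": temperature,
        "seed": seed,
        # Hash the code rather than storing it twice; the code itself
        # lives in source control alongside this record.
        "code_sha256": hashlib.sha256(generated_code.encode()).hexdigest(),
    }

record = generation_record(
    prompt="Write a class that validates customer records",
    model="example-code-model",   # hypothetical model name
    temperature=0.2,
    seed=42,
    generated_code="class CustomerValidator: ...",
)
```

Committing such a record next to the chosen variation gives reviewers the "when you asked it blah" trail, even though a fresh generation would still differ.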
That's really interesting. And it's also a little bit philosophical, right? You're saying that it can't repeat exactly the same solution, but it does give you a solution. So you're asking gen AI to give you a solution, and it succeeds at that; it's just not exactly the same each time. Understanding that in all of computer science there are many ways to solve a problem, what's the benefit of having the same solution over and over again?

Let's say that you want to add one feature to the code. As typical modern processes go, you want to prevent drift, so you remove the current branch of things and you want to start over with the new additional requirement. But when you start from the beginning and want to regenerate everything, you just can't, right? Because you are not going to get the same thing out.

When we're thinking about that, even with, you know, the difficulties with governance there, how can somebody get started with using gen AI tools in the coding or modernization process?

If we're talking technical, and we're talking computer science, data science, stuff like this, having a background at least, a foundational understanding of what you're doing before you start using these tools, I think is necessary. Otherwise it's not a good way to learn. If you're given the answer to something without understanding how it was arrived at, you miss the process, which is actually the really interesting part: how did you get to this? If you're just taking a block of code that's generated by, you know, an AI, and you're plugging it in and it works, but you don't understand why, that is an issue in learning.
And so having a fundamental understanding of programming, in this use case, is necessary. But to get started using it? I mean, easy peasy. You can find tools anywhere, and good tools almost everywhere. You just have to learn how to use them. And it's simple: just go into it, start doing it.

It's almost a maturity scale. I would say that once you install the generative AI tool of choice that is using the back-end model of choice, and, you know, different models and different vendors do different things, we can talk about that later, the usual stepwise path looks like this. The first thing developers typically should start doing is a very simple `/explain`, which asks the generative AI: what is this code? Through that, the generative AI is typically going to write out in English how it interprets whatever code you are pointing to. The next step is typically `/fix`, which is: find the error and fix that error. The third one would be `/test`, which is: generate tests for me and make sure that all of the boundary conditions are properly validated and tested. And then the last one is, of course, generating new code. So going through that, almost the easiest step is just to ask, almost like you'd ask a very senior developer: can you tell me what this does? That is very much the easy first step for developers.

So how do you even decide between the generative AI options that are out there?

You actually try multiple models, and that's one of the layers of complexity, and of lack of governance, that we are trying to address these days. If you look, for example, at how developers work, and we interviewed a whole bunch of them, there is a joint project between IBM and Red Hat right now where we are really intently looking into how new software is created.
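As an aside, the `/test` step Miha describes, generating checks for boundary conditions, might look like this on a tiny function (a hypothetical illustration, not output from any particular tool):

```python
# A small function a developer might point an assistant at.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# The kind of boundary-condition checks a "/test" step should produce:
# below, inside, above, and exactly on the edges of the range.
assert clamp(-5, 0, 10) == 0    # below range
assert clamp(5, 0, 10) == 5     # inside range
assert clamp(15, 0, 10) == 10   # above range
assert clamp(0, 0, 10) == 0     # lower edge
assert clamp(10, 0, 10) == 10   # upper edge
```

The value of the step is precisely that it enumerates the edges a hurried human tends to skip.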
With these tools, they're typically going to create the prompt, throw it at two or three different LLMs, and see the results coming back. Based on that, they change the prompt, maybe make a different selection of LLMs, and then change the prompt again. And then maybe they're going to say to one LLM: can you test the output from the other LLM, and so create a chain of those agents. At the end, they are very much using them like power tools, an agentic approach to generative AI, which means that they become agents that start to talk to each other: one generates code, the next one does the semantic parsing, the third one does the static code analysis, and the last one, for example, does the full end-to-end black-box testing. These are all potentially different LLM engines, different LLM models, each one specialized in one of those specific tasks. We have models that are not great at all at generating new code, but are absolutely amazing at verifying and testing existing code, for example. How to choose which one? At the moment it is still a little bit of, you know, dark arts. And that separates good, advanced AI software developers from beginners.

Okay. Well, look, we've been kind of talking about an ideal scenario, right? We've been imagining that your organization has all the resources in the world and can test all these different tools out and find the perfect one for you. But what if your organization just doesn't have the tools that you need, or that you want to use? Are you just out of luck?

It's a very typical approach: until we understand it, we are not going to allow it. But there are two separate problems here that we need to address.
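Before moving on: the agent chain Miha outlines (generate, semantically review, statically analyze, black-box test) can be sketched as a simple pipeline. The stages below are stand-in stub functions, not real LLM calls; the shape of the chain, each stage feeding the next, is the point.

```python
def generate_code(prompt):
    # Stub: pretend a code-generation model turned the prompt into code.
    return f"def solution():\n    # {prompt}\n    return 42\n"

def semantic_review(code):
    # Stub reviewer: in this toy version, just check for comments.
    return {"code": code, "has_comments": "#" in code}

def static_analysis(review):
    # Stub analyzer: pass anything the reviewer accepted.
    review["static_ok"] = review["has_comments"]
    return review

def blackbox_test(review):
    # Stub end-to-end check on the analyzed code.
    review["tests_pass"] = review["static_ok"]
    return review

# Chain the stages, the output of one feeding the next.
result = blackbox_test(static_analysis(semantic_review(
    generate_code("add two numbers")
)))
print(result["tests_pass"])  # prints True
```

In a real setup each stage could be a different model, picked for the verification-versus-generation strengths discussed above.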
Why are companies so reserved and risk-averse when it comes to using LLMs for, let's say, important code, mission-critical system code? The first problem is, of course, as you mentioned, giving that code to the LLM. Until there is a very thorough legal review, if you push code to some SaaS service or API endpoint that is outside of your organization, what happens with the data that you gave out? So the first problem is how they are going to treat the data. The second problem is what the model was trained on. And that's why those two things are super important. Where it runs: it could run inside the walls of a data center, or in a trusted environment that has very strict and very well-defined legal limitations. But the other part: is the model open-sourced, published, and fully vetted by a broader audience and the scientific community, or is it commercial, closed-source, dark, arcane art that you should just trust, right?

So in closing for today, here's what I'm thinking. I've got a few takeaways. Generative AI won't replace you as a coder, but it will seriously help you code. Code should be built to last for generations, not just for a moment in time. And having a fundamental understanding of code is still critical when you code using generative AI. So basically, just keep playing with all of your approved options, and emphasis on the approval.

All right. Miha, David, thank you both so much for being here. This has been a fantastic conversation. That's it for this episode, but everybody, please keep on listening. Don't worry, there are a ton more good bites, good nuggets where these came from. And we'll see you again here soon. All right, friends.