m.panchenko – Author at Kevuru Games
https://kevurugames.com/blog/author/m-panchenko/

Photorealism vs Stylization: How 3D Art Outsourcing Studios Adapt to Trends
https://kevurugames.com/blog/photorealism-vs-stylization-how-3d-art-outsourcing-studios-adapt-to-trends/
Tue, 17 Mar 2026 12:08:10 +0000

The post Photorealism vs Stylization: How 3D Art Outsourcing Studios Adapt to Trends appeared first on Kevuru Games.

If you look at the list of trends in game art, photorealism has been there for years. And so have various other styles. Every time technology brings 3D art closer to reality, it seems like the realistic style is going to take over, stripping the gaming world of its art diversity. Sounds like an old story?

The first time this story was told was in the 19th century, when photography was invented and some artists started panicking, as their craft seemed endangered. Almost two hundred years later, we can safely say that art didn’t disappear – it has evolved in many beautiful ways, partly because of the challenge photography posed, and even old-school oil portraitists still have jobs. Our guess is that something similar is happening in the world of video games.

Let’s replay this story one more time and, for once, bring evidence and numbers to prove the point: to navigate trends, you don’t need to know what is trendy. You need to understand the general rules behind how trends change. Let’s try to get there, starting from the basics.

Photorealism: When the Real World Becomes the Starting Point

Stalker 3D character art AAA game

Art from Kevuru Games portfolio

In modern AAA production, photorealism rarely begins with sculpting anymore. Quite often, it starts outside the studio. Artists go out and photograph real materials – rocks, asphalt, tree bark, damaged walls, bits of concrete. The number of photos taken can reach a hundred. These images are then processed with photogrammetry software, which reconstructs a 3D model that is as close to reality as the photos themselves.

Looks like a great technology, right? The funny part is that the result looks impressive but is almost useless for the game at first.

Scans come out heavy, chaotic, and full of problems. The topology is messy. The mesh is far too dense. Texture data needs cleaning. So the real work starts after the scan: rebuilding topology, simplifying geometry, adjusting materials so they behave properly in the engine.
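As a toy illustration of the “simplifying geometry” step, here is a minimal vertex-clustering decimator in Python – a sketch of the general idea, not any studio’s production tool (real pipelines use quadric-error decimation and proper retopology):

```python
# Toy vertex-clustering decimation: a stand-in for the mesh-reduction
# pass that follows a photogrammetry scan. Snap every vertex to a grid
# of size `cell`, merge vertices that land in the same cell, and drop
# triangles that collapse to a line or a point.
def decimate(vertices, triangles, cell=1.0):
    key_to_new = {}     # grid cell -> index of the merged vertex
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(key_to_new[key])
    new_triangles = []
    for a, b, c in triangles:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if len({a2, b2, c2}) == 3:   # keep only non-degenerate faces
            new_triangles.append((a2, b2, c2))
    return new_vertices, new_triangles

# A fan of 10 thin triangles over near-duplicate vertices collapses
# to a single triangle once the grid merges the cluster.
verts = [(i * 0.1, 0.0, 0.0) for i in range(11)] + [(0.0, 2.0, 0.0)]
tris = [(i, i + 1, 11) for i in range(10)]
new_v, new_t = decimate(verts, tris, cell=1.0)
print(len(verts), "->", len(new_v))   # 12 -> 3
```

The trade-off is the same one dedicated tools make, just cruder: fewer vertices, preserved overall shape.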

A lot of environment work in The Last of Us Part II followed this approach. The surfaces feel believable partly because many of them originate from real-world reference. But what players see on screen is the result of refinement, not just capture.

Image credit: The Last of Us Part II

Something similar happens in Red Dead Redemption 2. The world feels grounded because materials behave consistently. Wood absorbs light differently from metal. Dirt reacts differently from stone. That consistency matters more than sheer polygon density.

For outsourcing studios, working in a photorealistic pipeline often means stepping into an existing system. Assets must match the material logic already used in the project. Lighting, scale, and detail levels have to remain consistent with the rest of the environment.

It’s less about creating a single impressive model and more about fitting hundreds of assets into the same visual reality.

And that’s where photorealism becomes demanding. Not because the models are complicated – but because everything has to follow the same rules.

Stylization: Design First, Detail Second

Obviously enough, stylization is the opposite of photorealism. Closeness to the real world is not a strength here. It’s all about following an original style and creating a new reality based on it. That often means fewer polygons, but not less thought.

Take Fortnite. The characters are exaggerated, materials are simplified, and surfaces rarely aim for physical accuracy. But that’s exactly what people want from their skins – not to look like real people, but to look strange and fantastic. Here are some skins we made for Fortnite: Bushranger resembles nothing that exists anywhere in the real world, and that’s precisely why it became so popular among players.

Art from Kevuru Games portfolio

Another example is Deep Rock Galactic. The game’s environments rely on bold shapes and strong color contrast rather than dense geometry. Even in chaotic cooperative combat, players can quickly identify terrain, enemies, and objectives. The art direction supports gameplay readability rather than competing with it.

Stylization also affects the production pipeline. Instead of scanning materials or chasing photographic accuracy, artists spend more time defining rules for the visual language of the game.

Outsourcing studios have to focus on priorities such as:

• strong silhouettes
• controlled color palettes
• readable materials
• simplified geometry that still conveys weight and structure

For outsourcing studios, stylized projects often require a different type of discipline. The challenge is not matching real-world references but staying consistent with the project’s visual logic. A single prop that breaks the style – too realistic, too noisy, too detailed – can stand out immediately.

In that sense, stylization can actually be less forgiving than realism.

There are fewer details to hide mistakes. Everything depends on proportion, clarity, and cohesion.
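A hypothetical snippet of what a “controlled color palette” can mean in practice: snapping any incoming color to the nearest entry of an approved palette. The palette values below are invented for illustration, not taken from any real style guide:

```python
# Palette-enforcement sketch: pull any RGB color to the closest entry
# of a controlled palette (nearest neighbor by squared distance).
PALETTE = [
    (228, 166, 114),   # sand
    (96, 153, 102),    # foliage green
    (70, 90, 140),     # shadow blue
    (240, 240, 230),   # highlight
]

def snap_to_palette(color, palette=PALETTE):
    """Return the palette entry closest to `color` in RGB space."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

# An off-style brownish texel gets pulled to the sanctioned sand tone.
print(snap_to_palette((210, 150, 100)))   # (228, 166, 114)
```

A check like this won’t fix a prop that breaks the style, but it catches stray colors before they reach review.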

Art from Kevuru Games portfolio

On the list of the most popular games in the world, the division between the two types of art looks like this: photorealistic games make up around 35–40% of the total, while stylized ones account for 60–65%. The same tendency has held for many years, with the balance staying at roughly the same level.

You can see that the photorealistic ones typically belong to the biggest studios: Red Dead Redemption 2 by Rockstar Games, Call of Duty by Activision, EA Sports FC by Electronic Arts, and so on.

This is what people expect from AAA studios: using the latest tech advances and huge budgets to create the most immersive experiences for gamers. These games earn a lot at release but don’t necessarily last for decades (although many of them do – GTA V, for example).

The games that tend to engage players for many years are often stylized ones, with simple visuals that don’t really require the latest technology or high-performance devices (Fortnite and Minecraft, for instance). They still make lots of money, but the work studios do to keep them profitable over time focuses on different objectives, such as creating additional assets (skins, limited collections of accessories) or making small additions.

The expectations from indie studios are the opposite: they tend to release titles with stylized art that looks original and instantly recognizable. It may look simple, but the work invested is huge. The same goes for hybrid style, the one that combines elements of both photorealism and stylization. Here are the reasons.

Why Mixing Styles Is Harder Than It Looks

On paper, combining photorealism and stylization sounds like a good idea. Realistic environments with stylized characters, or the other way around – it feels like a way to get the best of both worlds.

In practice, it’s one of the easiest ways to break visual cohesion. The problem isn’t modelling itself. You can build both types of assets just fine. The issue shows up once everything sits in the same scene.

Materials start behaving differently. Realistic surfaces follow physically based rules – roughness, reflections, light absorption. Stylized materials often ignore or simplify those rules. When both exist side by side, lighting exposes the difference immediately.
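Those “physically based rules” can be made concrete. Most real-time PBR shaders approximate Fresnel reflectance with Schlick’s formula; the sketch below uses typical textbook F0 values (an assumption, not data from any particular engine) to show why metal and stone read so differently under the same light:

```python
# Schlick's approximation to Fresnel reflectance: how much light a
# surface reflects as a function of the viewing angle.
# F0 = reflectance at normal incidence: ~0.04 for most dielectrics
# (stone, wood, plastic); much higher, and tinted, for metals.
import math

def schlick_fresnel(f0, cos_theta):
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

F0_DIELECTRIC = 0.04   # typical textbook value for stone/wood/plastic
F0_METAL = 0.95        # rough ballpark for a polished metal channel

head_on_stone = schlick_fresnel(F0_DIELECTRIC, 1.0)   # looking straight at it
head_on_metal = schlick_fresnel(F0_METAL, 1.0)
# At grazing angles even stone becomes mirror-like, which is why
# consistent lighting exposes a non-PBR asset immediately.
grazing_stone = schlick_fresnel(F0_DIELECTRIC, math.cos(math.radians(85)))
print(round(head_on_stone, 3), round(head_on_metal, 3), round(grazing_stone, 3))
# -> 0.04 0.95 0.648
```

Stylized materials often flatten exactly this angle dependence, which is why a stylized prop dropped into a PBR-lit scene looks subtly wrong.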

Scale perception can drift too. Stylized characters might have exaggerated proportions, while realistic environments follow real-world measurements. Put them together without adjustment, and something starts to feel off – even if the player can’t explain why.

Detail level is another common issue. A highly detailed environment next to simplified characters can make the characters feel out of place. Or the opposite – stylized environments can make realistic assets look too “heavy” or overly complex.

There are games that handle this balance well, like Overwatch. The characters are clearly stylized – exaggerated proportions, simplified forms – but the materials and lighting are grounded enough that nothing feels disconnected. Here is how it looks.

Successful hybrid projects define clear rules for how materials behave, how lighting is handled, and how proportions are balanced. Less successful ones simply combine assets without fully reconciling those differences.

For outsourcing teams, hybrid styles are often more demanding than either pure realism or pure stylization. You’re not just matching one visual language – you’re balancing two, without letting them pull the project apart.

Why Style Choice Is Often a Business Decision

From the outside, the choice between photorealism and stylization looks like an artistic one. In reality, it’s often decided much earlier – and for very practical reasons.

Platform is usually the first constraint. If a game needs to run across a wide range of devices – especially mobile – asset weight becomes a real problem pretty quickly. It’s not only about frame rate. It’s about how big the build is, how much memory it takes, and how stable it feels on weaker hardware.

That’s where stylization tends to work better. You’re not trying to push every texture or mesh to its limit, so things stay more manageable. It gives the team a bit more room to balance performance without constantly fighting the assets.
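A back-of-envelope sketch of why asset weight bites on mobile. The numbers below assume uncompressed RGBA8 textures and the standard ~1/3 overhead of a full mip chain; real engines use compressed formats, so treat this as an upper bound:

```python
# Back-of-envelope texture memory estimate. Assumes uncompressed RGBA8
# (4 bytes per texel) and a full mip chain adding roughly 1/3 on top.
def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
    base = width * height * bytes_per_texel
    return base * 4 // 3 if mipmaps else base

MB = 1024 * 1024
# One 4K texture vs. the 1K texture a stylized mobile asset might use:
print(round(texture_bytes(4096, 4096) / MB, 1))   # 85.3
print(round(texture_bytes(1024, 1024) / MB, 1))   # 5.3
```

One 4K hero texture costs more memory than sixteen 1K stylized ones, before compression even enters the picture.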

Art from Kevuru Games portfolio

Then there’s production speed. Live-service games don’t ship once – they update constantly. New skins, environments, seasonal content. In that setup, a photorealistic pipeline becomes expensive to maintain. Every new asset has to match a high level of detail and material accuracy. Stylized pipelines are more flexible. They allow teams to move faster without breaking visual consistency.

Budget plays its role too. Photorealism scales quickly. One highly detailed asset is manageable. Hundreds of them, all needing to match the same level of realism, become a different problem entirely. That’s where outsourcing often comes in – not because internal teams lack skill, but because the volume becomes difficult to handle.

At the same time, some projects choose realism on purpose. If the goal is cinematic immersion or competing with AAA benchmarks, visual fidelity becomes part of the product itself. In those cases, realism is not just an artistic choice – it’s a positioning decision.

So the split usually looks something like this:

Stylization – when you need speed, scalability, and broad platform support

Photorealism – when you need immersion, detail, and visual impact

Outsourcing studios don’t just adapt to style. They adapt to the reasons behind it.

The Role of Technology in Both Directions

Technology influences both photorealistic and stylized production, but not in the way people often expect. New tools don’t automatically push games toward realism. In practice, they just give artists more flexibility.

Take modern game engines. Systems like Nanite in Unreal Engine allow extremely dense geometry to be rendered directly in real time. A few years ago that level of detail would have required aggressive optimization and baking workflows. Now it’s often possible to keep much more of the original mesh.

Art from Kevuru Games portfolio

That obviously benefits realistic environments. But the same technology also helps stylized projects. Faster rendering and real-time lighting make iteration easier, which matters when teams are experimenting with shapes, colors, or atmosphere rather than physical accuracy.

Material tools have gone through a similar shift. Software like Substance Painter and Designer changed how artists work with surfaces. In realistic projects the goal is usually physical consistency – making sure metal reflects correctly, stone behaves like stone, fabric reacts to light the way we expect.

Stylized projects use the same tools differently. Instead of matching real materials, artists often simplify them. Color becomes more important than micro-detail. Surfaces may exaggerate wear or ignore physical accuracy entirely, as long as the style stays coherent.

AI tools are starting to appear in these pipelines as well, mostly in places where artists would normally spend hours repeating the same steps. Texture cleanup, variation generation, small detail passes – the kinds of tasks that are necessary but not particularly creative. AI helps save time while keeping quality and the level of detail high. We have explained how we use an AI-assisted pipeline here.

Art from Kevuru Games portfolio

What’s interesting is that none of this technology actually chooses a visual direction.

The same engine can support a highly realistic open world or a deliberately simple stylized one. The tools don’t decide the style. They just remove some of the technical friction around producing it.

Stylized vs photorealistic games: why they don’t compete:

  • Photorealistic games often showcase technology at launch.
  • Stylized games often sustain engagement over many years.

How Outsourcing Studios Build Two Different Pipelines

Photorealistic and stylized projects may both fall under “3D art,” but from a production perspective they behave almost like different disciplines.

Outsourcing studios rarely specialize in only one of them. A single team may work on a realistic military environment for a shooter one month and stylized props for a mobile game the next. Supporting that range requires more than versatile artists – it requires flexible pipelines.

Photorealistic production is usually reference-driven. Artists rely heavily on real-world materials, scanning data, and physically based rendering rules. Consistency becomes the main challenge. If one material reacts to light differently from the rest of the environment, it immediately breaks immersion.

Art from Kevuru Games portfolio

Stylized production follows the opposite logic. Instead of matching reality, artists must match a style guide. Color ranges, proportions, and surface treatment are tightly controlled. The danger here isn’t realism – it’s deviation. One asset that is too detailed or too realistic can disrupt the entire visual language of the game.

For outsourcing teams, that means switching between two very different evaluation criteria.

Photorealistic Pipeline          | Stylized Pipeline
---------------------------------|----------------------------------
real-world reference matching    | style guide adherence
physically based materials       | controlled color palettes
scan cleanup and reconstruction  | silhouette and proportion design
material accuracy under lighting | readability during gameplay

The tools may overlap – most games are built with Blender, ZBrush, Substance, and Unreal – but the artistic decisions behind them change dramatically depending on the project.

Studios that work across both styles learn to treat visual direction almost like a technical specification. Before modelling even begins, artists need to understand which rules define the project: physical realism or stylistic coherence.

Conclusion: Style Is a Constraint, Not a Goal

One thing becomes clear when you look at enough projects: studios rarely start with “we want realism” or “we want stylization.” They start with constraints. Time, budget, platform, team size, how often the game needs to be updated – all of that starts shaping the visuals before anyone even opens a 3D tool. By the time production begins, a lot of the direction is already decided. Style just follows those decisions.

That’s probably why the same debate keeps coming back. Photorealism vs stylization sounds like a creative discussion, but in practice it’s usually a production one. You can see it in how different games succeed.

Minecraft works because its simplicity allows it to scale endlessly.

Fortnite works because its stylization supports constant updates without breaking cohesion.

Art from Kevuru Games portfolio

Red Dead Redemption 2 works because that level of realism is supported by years of coordinated production. It’s not just about detail – it’s about everything lining up, from materials to lighting to animation. Those choices aren’t interchangeable. 

You can see it in the numbers too. Stylized titles stay in the majority (about 60–65%) among new releases as well as most-played lists. Photorealistic projects are still produced by AAA studios, where visual fidelity is part of the status.

For outsourcing studios, this means the job isn’t to specialize in one visual style. It’s to understand the logic behind it.

A stylized project fails when it loses consistency. A photorealistic project fails when it breaks believability. A hybrid project fails when it tries to follow both sets of rules at once.

Inside the AI-Assisted Pipeline Behind the BallBuds Kickstarter Key Art
https://kevurugames.com/blog/inside-the-ai-assisted-pipeline-behind-the-ballbuds-kickstarter-key-art/
Mon, 16 Mar 2026 16:10:18 +0000

The post Inside the AI-Assisted Pipeline Behind the BallBuds Kickstarter Key Art appeared first on Kevuru Games.

A Kickstarter pitch is one of the most important pieces of art for a game. While it could seem like an exaggeration, think of it this way: if the art doesn’t look interesting enough for people to donate, the game might not get funds for development at all. So, it’s called key art for a reason.

So, when we were commissioned to work on the Kickstarter project BallBuds from Blauballs, we knew it was not just rough concept art that would be refined later. It had to be great, and it had to be done as quickly as possible.

BallBuds: The Game is a first-person open-world monster-taming adventure. “You awake with no real memories on the beach of a hidden archipelago crawling with BallBuds – elemental creatures that range from cute and cuddly to nightmare-fueled killing machines. Two factions of stranded survivors are waging all-out war: one led by a heavy metal maniac obsessed with “alpha energy,” and the other by a performative “spiritual” influencer who thinks kombucha and mushrooms can create world peace.”

For the promo campaign to work, the key art had to do more than just look good – it had to hold attention. At the same time, we couldn’t lose the project’s original visual language while pushing the image toward a more detailed, cinematic result.

Below is the breakdown of the process and where AI actually helped speed up the final refinements.

Preparation of the 3D Base

To achieve maximum authenticity, we requested a package of in-engine game characters from the developers. This allowed us to work with the original models and preserve the project’s stylistic integrity.

Before that, the characters themselves were assembled and refined in Character Creator 4. This stage allowed precise control over proportions, facial features, clothing, and poses based on the reference materials.

This workflow allowed us to maintain full control over the final look and prevent accidental proportion shifts or stylistic inconsistencies at later stages.

BallBuds 3D game characters in the graphic editor

Rendering and Artistic Enhancement

After the scene was rendered, we brought the image into Photoshop for the final pass. Here we worked over the render with photobashing, custom brushwork, additional lighting layers, atmospheric effects, and depth adjustments to refine the composition. 

The aim was to push the image beyond a raw render – increasing contrast, atmosphere, and visual tension so the final result feels more illustrative and expressive.


AI Integration in the Pipeline

Stable Diffusion was used at the final stage as a controlled detail-enhancement tool. We applied custom generation parameters tailored specifically to the project’s visual style.

The AI outputs were not used directly. Instead, they were blended into the base artwork through photobashing, using soft, semi-transparent layers. This approach allowed us to:

• enhance textural richness
• introduce micro-level detail
• achieve a more polished final look
• avoid the typical “neural” or synthetic appearance

AI functioned strictly as a supportive instrument rather than a primary visual source.
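The “soft, semi-transparent layers” step is ordinary alpha compositing. A per-pixel sketch of the idea (illustrative values, not the actual Photoshop layer stack):

```python
# Plain alpha compositing, per channel: mixing a semi-transparent AI
# detail layer over the hand-painted base artwork.
def blend_over(base, layer, opacity):
    """Linear interpolation between base and layer pixels (0..255 ints)."""
    return tuple(round(b + (l - b) * opacity) for b, l in zip(base, layer))

base_pixel = (120, 100, 90)    # painted base
ai_pixel = (140, 128, 96)      # AI-generated detail pass
print(blend_over(base_pixel, ai_pixel, 0.35))   # (127, 110, 92)
```

Keeping opacity low is what lets the AI texture enrich the surface without overriding the artist’s brushwork.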

Game art character: low level of detail (before)
Game art character: high level of detail (after)


Visual Cohesion in Crowd Scenes

For large-scale crowd scenes, the client provided AI-generated sketches. Our goal was to retain the characters’ original authenticity while adapting them to the semi-realistic style used in the final render.

To achieve this, we combined 3D base work, manual detailing, and carefully controlled AI upscaling. This approach allowed the crowd to merge naturally into the scene, maintaining visual consistency with the overall composition without standing out stylistically.

Final Outcome

The Blauballs project became an example of a hybrid AI pipeline where:

• 3D ensured precision and structural control
• The artist defined the style and artistic expression
• AI polished the details and accelerated production

This approach allowed us to create a visually striking key art while maintaining full creative control at every stage of the production process.

AI wasn’t used to generate art based on other artists’ creations. It was used to save our artists’ time. The most tedious work was done 40% faster, and that time saving was crucial for the Kickstarter campaign. By the way, the campaign raised 7 times more than its initial goal, and we are proud to have worked on this project.

Would you like to learn how AI-assisted pipelines can speed up the final polishing of game art? Ask our experts!

The Future of 3D Modelling for AAA and Indie Games: Two Industries, Two Directions
https://kevurugames.com/blog/the-future-of-3d-modelling-for-aaa-and-indie-games-two-industries-two-directions/
Wed, 11 Mar 2026 18:50:08 +0000

The post The Future of 3D Modelling for AAA and Indie Games: Two Industries, Two Directions appeared first on Kevuru Games.

When people talk about the future of 3D modelling, they often see it as moving in one direction that is rather tech-driven: higher fidelity, more realism, more automation. But that assumes the industry moves as a single unit. That’s not exactly the case.

AAA and indie studios are solving very different problems. One is scaling production across hundreds of artists and terabytes of assets. The other is trying to create a distinct visual identity with limited resources and small teams. The tools may overlap, but the priorities do not.

That divergence is what will shape the next decade of 3D modelling.

AAA is pushing toward industrialization – photogrammetry, scanning pipelines, high-density meshes rendered in real time. Indie is refining efficiency – stylization, modularity, clarity, and smart reuse.

Both are evolving. Just not in the same way.

AAA: From Sculpting Assets to Engineering Pipelines

In large-scale productions, 3D modelling is becoming less about isolated asset creation and more about system integration. Modelling isn’t just about sculpting a beautiful asset and handing it over. It’s about how that asset lives inside a much larger machine.

Take The Last of Us Part II. A huge part of its visual realism comes from scanning real-world materials. Think about something simple like a rock. In older pipelines, someone would sculpt it from scratch in ZBrush, build the texture, tweak it, iterate. Today, teams often just go outside and scan a real one. They walk around it with a camera, shoot it from every angle, and feed those images into reconstruction software.

Image source: https://en.gamegpu.com/

But scanning is just the starting point. Raw scan data is messy. It needs cleanup, retopology, optimization, shader adjustments, and proper integration into lighting systems. All of that takes many hours of refinement.

Or look at Cyberpunk 2077. The density of that world – neon signage, layered props, detailed interiors – isn’t just the result of talented modellers. It’s the result of a structured asset library. Modular pieces are reused intelligently. Materials are standardized. Level art relies on shared kits to maintain consistency at scale.

Image credit: Cyberpunk 2077

In both cases, modelling isn’t isolated craftsmanship. It’s coordinated production. The result is visual density that would have been impossible a decade ago.

But here’s the shift: the bottleneck is no longer sculpting detail. It’s managing it.

AAA modelling is moving toward:

  • pipeline automation
  • asset version control at scale
  • LOD strategy aligned with real-time rendering systems
  • cross-department synchronization between art, tech art, and engine teams
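To make the “LOD strategy” bullet concrete, here is a sketched LOD picker driven by projected screen size. All thresholds, screen and FOV numbers are illustrative assumptions, not any engine’s defaults:

```python
# Illustrative LOD picker: choose a level of detail from the object's
# projected on-screen height, dropping one LOD level each time the
# projected size halves.
import math

def pick_lod(object_height_m, distance_m, fov_deg=60.0,
             screen_height_px=1080, full_detail_px=512, lod_count=4):
    """Return 0 (full detail) .. lod_count - 1 (coarsest)."""
    # Perspective projection: approximate on-screen height in pixels.
    pixels = object_height_m * screen_height_px / (
        2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0))
    if pixels >= full_detail_px:
        return 0
    # Each halving of projected size costs one LOD level.
    level = int(math.log2(full_detail_px / max(pixels, 1e-6)))
    return min(level, lod_count - 1)

# A 2 m character: full detail up close, coarsest mesh far away.
print(pick_lod(2.0, 5.0), pick_lod(2.0, 20.0), pick_lod(2.0, 50.0))  # 0 2 3
```

Real systems add hysteresis and blending to hide the transitions, but the core decision is this kind of screen-coverage test.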

The future AAA modeller will need to think beyond form and silhouette. They’ll need to understand memory budgets, shader complexity, streaming systems, and runtime performance constraints.

In other words, modelling is becoming more technical – not less artistic, but more systemic.

And that changes the role itself.

Indie: Style Over Scale

If AAA studios are trying to manage complexity, indie teams are usually trying to avoid it.

Smaller teams don’t have the luxury of scanning real-world materials or maintaining massive asset libraries. What they do have is control. Fewer people. Shorter pipelines. Faster decisions.

Take Valheim. The low-poly look isn’t there because the team couldn’t do more. It’s there because they didn’t need to. The shapes are simple, sometimes almost rough, but the atmosphere carries it. The game is not trying to compete with ultra-realistic AAA visuals. It’s following another path. Not every game has to impress everyone. If the style is clear and consistent, the audience will find it.

Image credit: https://www.valheimgame.com/

Or take Satisfactory. It’s fully 3D, large in scale, and still clearly indie in production logic. The world isn’t overloaded with micro-detail. Instead, it relies on modular industrial elements (pipes, conveyors, platforms, structural frames), all designed to snap together cleanly.

The visual identity doesn’t come from extreme realism. It comes from consistency. Surfaces are readable. Materials are controlled. Geometry is practical. Even when the player builds massive factories, the scene doesn’t collapse under visual noise because the modelling rules stay disciplined.

It’s a good reminder that scale doesn’t automatically require photorealism. You can build a complex 3D world without chasing cinematic density – as long as your asset system is coherent.

Satisfactory 1.0 Launch Trailer

In indie 3D modelling, efficiency becomes part of the design language. Instead of pushing fidelity higher and higher, teams often focus on readable shapes, modular environments, reusable props, and stylized materials that hide repetition. There’s less room for waste. Every asset has to justify the time spent on it.

That constraint often leads to smarter decisions. If AAA is solving “How do we handle more detail?”, indie is solving “How do we say more with less?”

And sometimes, that limitation becomes the advantage.

AI in 3D Modelling: What Actually Changes

Now comes the obvious question: where does AI fit into all of this? Not where most headlines suggest.

AI is not replacing sculpting in AAA pipelines, and it’s not suddenly building entire worlds for indie teams. What it’s doing – quietly – is reducing friction. In practice, AI shows up in very specific places:

  • automatic retopology suggestions
  • UV unwrapping assistance
  • texture upscaling
  • normal and height map generation
  • smart material variation
  • LOD creation support

These aren’t glamorous tasks. They’re time-consuming ones. For a AAA studio, shaving hours off repetitive cleanup across hundreds of assets can translate into weeks saved at production scale. For an indie team, it can mean the difference between shipping and slipping.
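As a sketch of what “smart material variation” might look like at its simplest, here is a deterministic hue-and-value jitter helper. The function name, jitter ranges, and base color are invented for illustration:

```python
# Hypothetical "smart variation" helper: generate deterministic
# hue/value jitters of a base material color - the kind of low-stakes
# busywork that automation absorbs in a production pipeline.
import colorsys
import random

def color_variations(base_rgb, count=4, hue_jitter=0.03, val_jitter=0.08, seed=7):
    rng = random.Random(seed)     # fixed seed -> reproducible variants
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in base_rgb))
    out = []
    for _ in range(count):
        h2 = (h + rng.uniform(-hue_jitter, hue_jitter)) % 1.0
        v2 = min(1.0, max(0.0, v + rng.uniform(-val_jitter, val_jitter)))
        out.append(tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h2, s, v2)))
    return out

variants = color_variations((140, 110, 80))   # weathered-wood base color
print(variants)
```

The point isn’t the math, it’s the determinism: the same seed reproduces the same variants, so an artist can review, keep, or discard a batch.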

Character art before AI-assisted detailing

Character art polished with AI support

The important shift is this: AI doesn’t create the core asset. It accelerates the parts that don’t require creative judgment. Here is the work that artists do:

  • define form
  • control proportions
  • shape silhouettes
  • establish material logic
  • set the visual tone

AI simply helps with the technical polish, especially where precision and repetition matter more than artistic intuition. The real impact won’t be visible in screenshots. It will be visible in production timelines.

And that’s where both AAA and indie teams start to converge – not in style, but in the need to move faster without lowering quality.

A Practical Example: Keeping Things Efficient on BallBuds

On projects like BallBuds at Kevuru Games, the challenge wasn’t visual overload or ultra-realism. It was speed, clarity, and consistency.

The game has a stylized direction, which immediately changes how you approach modelling. You’re not chasing micro-detail. You’re chasing clean shapes and readable forms that work well in motion.

BallBuds 2D art

In that context, the biggest risk isn’t “not enough polygons.” It’s wasting time on polish that doesn’t affect player perception.

For BallBuds, the focus was on:

  • keeping geometry clean and lightweight
  • making sure silhouettes read clearly at gameplay distance
  • ensuring assets behaved correctly inside the engine
  • maintaining stylistic consistency across iterations

AI-assisted tools were used carefully, mostly where they reduced repetitive technical work. For example, speeding up texture refinement or helping generate small material variations that were later adjusted manually.

The key was control. Nothing was used raw. Everything was reviewed, refined, and aligned with the game’s established art direction. In a project like this, AI doesn’t redefine modelling. It protects time. And in smaller-scale productions, time is often the most limited resource.

The Skill Set Is Changing – Slowly, But Clearly

One of the biggest shifts isn’t happening in software. It’s happening in expectations. Ten years ago, a strong 3D modeller could focus almost entirely on sculpting and texturing. Today, especially in larger teams, that’s rarely enough.

In AAA environments, artists are expected to understand how their assets behave in engine. That means thinking about:

  • poly density distribution
  • LOD transitions
  • shader complexity
  • material instancing
  • streaming constraints
  • and many more…

It’s no longer just “Does this look good in Marmoset?” It’s “Does this hold up under dynamic lighting, at runtime, with dozens of similar assets loaded?”

Indie teams face a different pressure. There, the modeller often wears multiple hats. You might model, texture, set up materials, drop assets into the engine, and even adjust lighting. The workflow is tighter, but the responsibility is broader.

What’s interesting is that both paths demand more awareness of systems.

The future 3D artist isn’t becoming less creative. But they are becoming more technical. They need to understand how their work fits into performance budgets, production timelines, and pipeline logic. And this doesn’t mean everyone becomes a technical artist. It means the wall between “art” and “tech” is thinner than it used to be.

The modeller of the future will still care about form and composition. But they’ll also think about efficiency, integration, and iteration speed – because that’s where modern production lives.

What Won’t Change

With all the talk about AI, scanning, real-time pipelines, and automation, it’s easy to assume that everything about 3D modelling is being rewritten.

It isn’t.

Some fundamentals haven’t moved in decades – and probably won’t. A strong silhouette still matters more than micro-detail. If a character or prop doesn’t read clearly from gameplay distance, no amount of texture resolution will fix it.

Proportions still determine believability. Even in stylized worlds, internal logic has to hold. If something feels “off,” players notice – even if they can’t explain why.

Material logic still drives realism. Wood has weight. Metal reflects differently depending on roughness. Fabric folds in predictable ways. These aren’t trends. They’re observation skills.

And perhaps most importantly: cohesion still beats complexity. A consistent art direction with moderate detail almost always ages better than hyper-detailed assets stitched together without a clear visual language. That’s true in AAA. It’s even more obvious in indie.

Technology cycles every few years. Engines change. Tools improve. AI tools evolve. Taste evolves much slower. No matter how advanced pipelines become, modelling will still depend on observation, design intent, proportion control, visual hierarchy, and clarity in gameplay context.

In other words, the craft doesn’t disappear. It just operates inside smarter systems. And that might be the most realistic way to think about the future.

Two Roads, One Discipline

If you zoom out, the future of 3D modelling doesn’t point in one direction. It splits.

AAA studios will continue pushing scale – more data, more density, more integration between departments. Their challenge will be managing complexity without slowing production.

Indie teams will continue refining efficiency – stronger style, smarter reuse, clearer pipelines. Their challenge will be standing out without chasing technical arms races.

The interesting part is that both sides are learning from each other. AAA is starting to value stylization and readability again, especially for gameplay clarity. Indie teams are adopting more advanced tools to speed up iteration without inflating scope.

And across both, one pattern is clear: The future is less about “more polygons” and more about smarter decisions. Smarter asset reuse. Smarter integration with engine constraints. Smarter use of automation. Smarter production planning.

The modeller of the next decade won’t win by simply adding more detail. They’ll win by understanding where detail matters — and where it doesn’t.

In the end, 3D modelling isn’t disappearing into AI or being swallowed by automation. It’s becoming more strategic. The craft remains. The environment around it gets faster. And the studios that understand that balance – whether AAA or indie – will shape what the next generation of games actually looks like.

A Few Numbers That Explain Where Things Are Going

Trends and future projections are not the most reliable sources, even when they come from top industry professionals. But here are a few data points that ground the current state of 3D modelling:

  • In Google Cloud’s 2025 developer research (615 developers surveyed), 87% said they already use some form of AI in their workflows, and 95% said it reduces repetitive tasks. Around 44% of developers use agents to optimize content and process information such as text, voice, code, audio, and video rapidly.
  • GDC’s 2025 State of the Game Industry coverage reported that 52% of surveyed developers work at companies that have implemented generative AI, and 36% personally use it.
  • The same report shows what exactly gen AI is used for: research and brainstorming (81%), administrative tasks like email (47%), prototyping (35%), testing or debugging (22%), asset generation (19%), player-facing features (5%).
  • Generative AI has received plenty of criticism from the professional community. Copyright concerns aside, many developers think it has a negative impact on several areas – and their number is rising, from 30% in the 2025 GDC report to 52% in 2026. Only 7% of respondents saw it as positive in 2026.

Now, here’s the useful part for this article: those numbers don’t mean “AI is making games.” They mostly mean teams are trying to compress production time, and 3D art pipelines are one of the biggest places to do it.

What this looks like in practice

| Pipeline pressure | AAA reality | Indie reality | What’s getting adopted first |
| --- | --- | --- | --- |
| Asset volume | Thousands of assets, many owners, strict consistency | Small libraries, fewer assets, faster iteration | Standardized kits, reuse systems, strict naming/versioning |
| Geometry strategy | Dense meshes can survive longer in-engine (Nanite-style), but still need rules | Geometry kept simple for speed and readability | More modular modelling, fewer unique hero assets |
| Time sinks | Cleanup across many assets becomes the hidden cost | “Polish time” can kill shipping dates | Tools that reduce repetitive work (UV/retopo helpers, detail polish) |
| AI usage pattern | Pipeline acceleration at scale | Time protection for small teams | Assistive steps, not raw outputs |

The Unseen Part of 3D Modelling

When people imagine the future of 3D modelling, they often think about visible change – higher fidelity, better shaders, more realistic lighting.

But most production friction doesn’t live there. It lives in the small, repetitive steps that multiply across dozens or hundreds of assets. Retopology that has to be redone. UV layouts that need adjustment after scale changes. LOD chains that don’t transition smoothly. Materials that break under a different lighting setup. Assets that technically look fine but fail memory or streaming constraints.

In AAA, this friction compounds because of scale. One inefficient workflow multiplied by 2,000 assets becomes a scheduling problem.

In indie, the friction is different, though not by much. When a team of a few people does all the work, covering 15 roles between them, the time that can be saved is even more precious.

That’s why the future of 3D modelling may not look dramatic from the outside. The real evolution will be in compression, which means:

  • fewer manual passes
  • better interoperability between tools
  • smarter asset validation inside engines
  • earlier performance feedback
  • clearer modular standards

In AAA, this means pipelines that flag issues before they cascade. In indie, it means tools that reduce iteration fatigue.
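To make “pipelines that flag issues before they cascade” concrete, here is a minimal sketch of such a validation pass. The budgets, naming prefixes, and asset fields are illustrative assumptions, not taken from any specific engine or studio pipeline:

```python
# Hypothetical asset-validation pass. All budgets, prefixes, and fields
# below are invented for illustration.

TRIANGLE_BUDGET = 20_000                  # per-asset triangle budget (example)
TEXTURE_BUDGET_MB = 8                     # per-asset texture memory budget (example)
NAME_PREFIXES = ("SM_", "SK_", "PROP_")   # example naming convention

def validate_asset(asset: dict) -> list[str]:
    """Return human-readable issues; an empty list means the asset passes."""
    issues = []
    if not asset["name"].startswith(NAME_PREFIXES):
        issues.append(f"{asset['name']}: name breaks convention {NAME_PREFIXES}")
    if asset["triangles"] > TRIANGLE_BUDGET:
        issues.append(f"{asset['name']}: {asset['triangles']} tris over budget")
    if asset["texture_mb"] > TEXTURE_BUDGET_MB:
        issues.append(f"{asset['name']}: {asset['texture_mb']} MB textures over budget")
    if len(asset["lods"]) < 3:
        issues.append(f"{asset['name']}: only {len(asset['lods'])} LODs, expected 3+")
    return issues

assets = [
    {"name": "SM_Crate01", "triangles": 4_200, "texture_mb": 2, "lods": [0, 1, 2]},
    {"name": "crate_old", "triangles": 95_000, "texture_mb": 12, "lods": [0]},
]
for a in assets:
    for issue in validate_asset(a):
        print("FLAG:", issue)
```

Even a check this simple, run automatically on every asset submission, is the kind of early feedback that keeps one inefficient workflow from multiplying across 2,000 assets.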

The irony is that players won’t see most of this. They won’t know an asset passed through automated validation or that LOD transitions were generated with assistance. What they will see is stability. Cohesion. Fewer visual inconsistencies. More reliable performance. And that’s where the future becomes less about spectacle and more about discipline.

Against the Stereotype. Why Photorealism Is Not Always Progress

There is a quiet assumption in the industry that more realism equals advancement. Higher resolution textures, denser meshes, physically accurate shaders – all of it is framed as evolution. And in some cases, it is. But it’s not automatically improvement.

Photorealism increases production cost exponentially. Every surface demands believable wear. Every prop must survive scrutiny in close-up shots. Lighting becomes less forgiving. Animation errors stand out more. What once could be suggested now has to be fully justified.

In large AAA productions, this makes sense, as cinematic immersion is what players often expect from large releases. Titles like Red Dead Redemption 2 built entire ecosystems of detail – from weather systems to animal behaviors – to support visual realism. But that level of density is sometimes the reason such productions spend years in so-called development hell.

For smaller teams, chasing the same benchmark can become a trap. Increasing geometric detail does not automatically improve player experience. In many cases, clarity and responsiveness matter more than surface complexity.

Stylization, when intentional, often scales better. It creates stronger identity. It ages more gracefully. It reduces the burden of perfect physical accuracy. And it allows teams to allocate time toward mechanics, level design, and polish rather than microscopic texture adjustments.

The future of 3D modelling may actually involve more conscious restraint. Not because technology can’t handle more detail – but because design priorities don’t always benefit from it. Higher poly counts are a technical achievement. They are not a design goal. And that distinction will become increasingly important as tools continue to remove technical limits.

The Future Is a Choice, Not a Direction

If there’s one mistake the industry keeps making, it’s assuming that technology sets the course.

It doesn’t.

Engines will get faster. Geometry limits will stretch. AI tools will compress production time. But none of that decides what games should look like. It only expands what is possible.

AAA studios will continue building massive, technically astonishing worlds. Indie teams will continue proving that clarity, style, and strong art direction can outperform raw density. Both approaches will coexist – sometimes even merge.

What will matter most in the next decade of 3D modelling isn’t how much detail we can push. It’s how deliberately we use it.

The strongest teams won’t be the ones with the most polygons. They’ll be the ones who understand where detail creates value – and where it simply creates noise. However fast technology accelerates, taste, judgment, and restraint will still decide whether it all makes sense.

The post The Future of 3D Modelling for AAA and Indie Games: Two Industries, Two Directions appeared first on Kevuru Games.

AI in Game Design: How Agencies Create Smarter Player Experiences And Where Is It All Going https://kevurugames.com/blog/ai-in-game-design-how-agencies-create-smarter-player-experiences-and-where-is-it-all-going/ Wed, 11 Mar 2026 18:41:23 +0000 https://kevurugames.com/?p=26815
When was the last time a month passed without a new AI-related scandal in the gaming industry? Art used without authorization, massive layoffs caused by AI replacing humans, and countless cases of companies showcasing AI features in games that bring no clear value.

Most players are highly critical of that behaviour, and developers at large share the sentiment – more than 50% of industry professionals think that AI has a negative impact on gaming. So why is it spreading despite all of this? It’s not just the hype cycle. We believe the answer lies in how AI is used – not to replace people and churn out low-quality gaming experiments, but as a working tool, effective and completely ethical. To get there, let’s first look at the history of AI in game design.

What Is AI in Games? Not Just NPC Logic

For years, AI in games meant one thing – enemy behavior. Pathfinding, state machines, scripted reactions. If an NPC could take cover or flank the player, it was considered advanced.

That definition no longer holds. Today, AI can do a lot more, reaching into every part of the game development process. The focus has shifted from “How smart is this enemy?” to “How intelligently does this game respond to the player?”

According to a Google Cloud survey, 87% of video game developers use AI agents. That doesn’t mean they let AI design games for them. Rather, the tools help them save time on routine tasks and do their core job faster.

Game design agencies often have strong creative direction and gameplay expertise. What they may lack is the infrastructure and data-layer architecture needed to design intelligent systems that scale. Agencies specializing in AI don’t replace designers – they extend them. They build frameworks that allow designers to move from handcrafted scripts to adaptive systems.

Smarter experiences are not about making games harder. They are about making games more responsive.

A well-designed AI system can:

  • detect when a player is disengaging
  • adjust challenge curves dynamically
  • personalize rewards without breaking economy balance
  • identify friction before churn happens

This changes the design philosophy itself. Instead of shipping static content, teams design systems that evolve in response to player behavior.

And this is where the real transformation happens. AI in modern game design is less about spectacle and more about structure. It’s not the visible trick. It’s the invisible layer that makes everything feel intentional.

In the next section, we’ll look at how this evolution happened – from rigid scripting to adaptive design systems that learn and respond over time.

From Rule-Based Logic to Adaptive Systems: The Real Evolution of Game AI

What the industry has historically called “AI” in games was not artificial intelligence in the machine learning sense. It was deterministic decision logic designed to simulate intelligence.

If you go back to the 80s and early 90s, what we called “AI” was mostly clever rule design.

Take Pac-Man. The ghosts didn’t think. Each one followed a specific movement pattern coded directly into the game. They felt different because their rules were different. That was the trick. No learning, no adaptation — just tightly written behavior.

Looking back from today, we wouldn’t even call it AI. In 2020, NVIDIA researchers created an AI model that can generate a fully functional version of Pac-Man without an underlying game engine. They did it by training the model on 50,000 episodes of the game – no rules.

Jump to the mid-90s and the legendary Quake. The game brought plenty of innovation to game development, including mechanics that bring us closer to modern AI. Enemies could switch states – patrol, chase, attack, retreat – depending on what the player did. It looked reactive, and at the time, no other game had those dynamics. But everything the enemies did was still defined ahead of time. The system didn’t change. It executed.

Image credit: Microsoft

Curiously, in 2025, Microsoft did something similar to what NVIDIA had done with Pac-Man five years earlier. It released a Copilot feature that recreates Quake 2 in real time using AI as it’s being played. But this time, it wasn’t received as an interesting experiment. Players were largely frustrated that the company presented a noticeably worse version of the game as something exciting. It’s one of those cases where AI was used for no reason other than to show off what AI can do. But let’s get back to the history.

By the early 2000s, AAA game companies developing titles such as F.E.A.R. raised the bar again. Enemies seemed coordinated. They took cover, flanked, shouted to each other. Players described them as “smart.” In reality, these behaviors were driven by structured decision trees. Complex, yes. Adaptive, no. And yet, 20 years later, people on Reddit claim that it was the best AI in FPS games ever.

Around that same era, developers started experimenting with utility-based systems. To make NPCs’ reactions smarter and more variable, designers assigned scores to possible actions. The system would evaluate the situation and pick the highest-scoring option. This allowed for more variability, but the logic still depended on handcrafted weights.
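The scoring idea can be sketched in a few lines. The actions, context fields, and weights below are illustrative assumptions – real utility systems weigh far richer considerations – but the selection logic is the same: score every candidate action, pick the maximum:

```python
# Utility-based NPC decision making: each action gets a score from
# handcrafted weights; the highest-scoring one wins. Actions, context
# fields, and weights are invented for illustration.

def score_actions(ctx: dict) -> dict:
    """Score each candidate action for the current situation."""
    return {
        "attack":     0.6 * ctx["enemy_visible"] + 0.4 * ctx["own_health"],
        "take_cover": 0.8 * (1 - ctx["own_health"]) + 0.2 * ctx["under_fire"],
        "reload":     0.9 * (1 - ctx["ammo"]),
        "patrol":     0.2 * (1 - ctx["enemy_visible"]),
    }

def choose_action(ctx: dict) -> str:
    scores = score_actions(ctx)
    return max(scores, key=scores.get)

# A healthy NPC with a visible enemy and full ammo prefers attacking...
print(choose_action({"enemy_visible": 1, "own_health": 0.9, "under_fire": 0, "ammo": 1.0}))
# ...while a wounded one under fire retreats to cover.
print(choose_action({"enemy_visible": 1, "own_health": 0.2, "under_fire": 1, "ammo": 0.5}))
```

The variability comes entirely from the designer-tuned weights – which is exactly why these systems felt smarter without ever learning anything.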

All of these approaches shared one trait:

They did not learn from player behavior.

They executed predefined logic.

And the secret to the best AI was in how it was all executed – not just smart technology, but overall team professionalism and dedication.

The real shift began in the 2010s, when large-scale telemetry became standard in online and live-service games. Telemetry is essentially the automatic collection of gameplay data. Every time a player completes a level, quits mid-session, fails a boss fight three times, purchases an item, or spends five minutes stuck in one area, that information can be recorded. Not personal data – but behavioral signals.

Instead of guessing how players behave, game development companies could now see patterns at scale. They could measure where frustration spikes, where engagement drops, and how progression actually unfolds in real play.

Studios started collecting behavioral data at scale:

  • session length
  • failure frequency
  • progression pacing
  • monetization interaction
  • churn indicators
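In practice, a telemetry layer like this boils down to structured events. Here is a minimal sketch, with event names and fields invented for illustration:

```python
# Minimal telemetry sketch: gameplay moments recorded as structured events
# (behavioral signals, not personal data). Event names and fields are
# invented for illustration.
import json
import time

def make_event(event_type: str, **fields) -> str:
    """Serialize one behavioral signal as a JSON line for batch upload."""
    return json.dumps({"type": event_type, "ts": int(time.time()), **fields})

events = [
    make_event("session_end", length_s=1260),
    make_event("encounter_failed", encounter="boss_03", attempt=3),
    make_event("purchase", item="starter_pack"),
]
for line in events:
    print(line)
```

Aggregated over millions of sessions, events this small are what let studios see frustration spikes and progression pacing instead of guessing at them.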

This data layer made something new possible — adaptive systems.

Instead of asking:

“What should the NPC do in this scenario?”

Designers began asking:

“How should the system respond to this player?”

Dynamic difficulty adjustment, live economy balancing, and personalized event tuning emerged from this shift. The AI layer moved from character behavior to system intelligence.

Later on, systems started affecting more than moment-to-moment combat. In Middle-earth: Shadow of Mordor, the Nemesis System tracked how you interacted with specific enemies. Orcs stopped being a uniform mass of objects to fight. If you humiliated or escaped one, he might remember it the next time you met. The hierarchy shifted based on those encounters.

Image credit: thegamer.com

It wasn’t machine learning. The rules were still predefined. But the structure allowed outcomes to feel personal and unpredictable. That was the turning point – not smarter enemies, but systems that reshaped the world around the player’s actions. The Nemesis System was such a successful and innovative game mechanic that it was patented in 2021 (which unfortunately limited its use in other games).

Today, with machine learning integration becoming more common in production pipelines, the distinction is clearer:

Rule-based AI follows instructions written in advance.

Adaptive AI looks at player behavior and modifies systems over time.

The terminology has changed a lot in recent years, and so has the role of agencies building these systems.

Where AI Is Most Commonly Used in Games Today

The fact that the vast majority of game developers use AI nowadays probably isn’t surprising to you. The question is where exactly, and how, it is used. Nobody wants to think that their favourite game characters were generated by AI, but a game delivered in one year instead of five thanks to AI – that’s something people surely wouldn’t mind.

Here are the most common areas where AI is actively used today.

1. Production and Asset Creation

This is currently the largest area of adoption.

AI tools are widely used for:

  • concept iteration and visual exploration
  • texture upscaling and enhancement
  • animation cleanup and retargeting
  • voice prototyping and localization support
  • code assistance

Game art outsourcing studios are not replacing the creative process with generative AI. They are accelerating iteration. Instead of spending days generating multiple visual variations, teams can test directions faster and refine manually afterward. Here’s a detailed explanation of how we did it for BallBuds.

BallBuds. Character art before AI detailing

BallBuds. Game art from Kevuru Games portfolio enhanced with AI tools

For example, texture upscaling tools based on neural networks are commonly used to remaster older titles. NPC voice prototyping with AI-generated speech allows narrative teams to test pacing before final recording. The key pattern here is optimization, not automation.

2. Player Analytics and Retention Modeling

Live-service and mobile games rely heavily on behavioral data. AI models are used to:

  • predict churn probability
  • segment players by engagement patterns
  • optimize reward timing
  • personalize event difficulty
  • recommend in-game offers

This is especially common in free-to-play ecosystems. For example, many mobile strategy and RPG titles dynamically tune offers and event rewards based on player progression data. While companies rarely publish full technical details, this kind of predictive modeling has become industry standard in mobile analytics platforms.
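To illustrate the segmentation idea, here is a deliberately simple sketch. The thresholds are invented; production systems typically learn them from historical data rather than hard-coding them:

```python
# Hypothetical churn-risk segmentation from basic engagement signals.
# Thresholds are invented for illustration; real systems derive them
# from historical player data.

def churn_segment(days_since_last_session: int, sessions_last_week: int) -> str:
    """Bucket a player into a coarse risk segment."""
    if days_since_last_session >= 7:
        return "lapsed"
    if days_since_last_session >= 3 or sessions_last_week <= 2:
        return "at_risk"
    return "engaged"

print(churn_segment(0, 12))  # daily player
print(churn_segment(4, 3))   # cooling off
print(churn_segment(9, 0))   # gone quiet
```

The point isn’t the model’s sophistication – it’s that segments like these drive downstream levers such as reward timing and offer targeting.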

3. Dynamic Difficulty Adjustment

Adaptive difficulty has existed for decades, but modern systems are more data-driven. Instead of switching between predefined difficulty modes, AI-based approaches can monitor:

  • reaction times
  • failure frequency
  • resource depletion rates
  • time spent per encounter

The system can then subtly adjust enemy health, spawn density, or loot drops.
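A minimal sketch of such an adjustment loop, with signal names and tuning factors invented for illustration:

```python
# Sketch of a data-driven difficulty adjustment: monitored signals nudge
# tuning multipliers instead of switching discrete difficulty modes.
# Signal names, thresholds, and factors are invented for illustration.

def tune_encounter(signals: dict) -> dict:
    """Map recent player signals to small, bounded tuning multipliers."""
    tuning = {"enemy_health": 1.0, "spawn_density": 1.0, "loot_rate": 1.0}
    if signals["recent_failures"] >= 3:
        tuning["enemy_health"] = 0.9      # ease off slightly
        tuning["spawn_density"] = 0.85
        tuning["loot_rate"] = 1.15
    elif signals["avg_clear_time_s"] < 60 and signals["recent_failures"] == 0:
        tuning["enemy_health"] = 1.1      # keep fast players challenged
        tuning["spawn_density"] = 1.1
    return tuning

print(tune_encounter({"recent_failures": 4, "avg_clear_time_s": 180}))
print(tune_encounter({"recent_failures": 0, "avg_clear_time_s": 45}))
```

Note how small the multipliers are: the adjustments are meant to stay below the player’s perception threshold.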

Another good use of AI is matchmaking in online games, where players have to be grouped into balanced sessions. It handles the task of balancing skill levels well. While not machine learning in every case, ranking and performance prediction systems are increasingly data-informed and continuously recalibrated. The goal is not to make the game easier – it is to maintain engagement.

4. Procedural Generation and Content Scaling

AI is also used to support procedural world-building. Earlier procedural systems relied on mathematical noise functions and rule combinations. Today, AI-assisted generation helps with:

  • terrain generation
  • quest variation
  • dialogue expansion
  • environmental detail enhancement

Games like No Man’s Sky rely heavily on procedural systems to create large-scale worlds. While not purely machine-learning driven, the principle remains the same: algorithmic systems extend content beyond manual capacity. Modern AI tools are now being layered on top of these systems to add variety and reduce repetition.

Here is another example, very different from No Man’s Sky: Candy Crush Saga, a mobile matching game. Candy Crush generates outsized revenue with a rather small team working on it. The game releases lots of new levels regularly, and creating those levels is now the AI’s job. If you are curious to learn more numbers and secrets behind the success of Candy Crush, read our article.

5. Testing and Quality Assurance

One of the least visible but most practical uses of AI is automated testing. AI agents can simulate player behavior to:

  • identify level-breaking paths
  • stress-test economies
  • detect balance exploits
  • uncover collision bugs

In large-scale multiplayer environments, this reduces manual QA workload and shortens iteration cycles. This area is growing quickly because it produces measurable cost savings.

The Pattern Across All Categories

Across production, analytics, balancing, and testing, one pattern repeats: AI is used to increase speed, scale, and precision.

It is rarely the creative decision-maker.

It is increasingly the optimization layer.

Case Scenario: Adaptive Difficulty in a Mid-Core Action Game

Imagine a mid-core action game with skill-based combat and progression tied to gear upgrades. Something like Resident Evil 4 or Hades. Once the main development stages are over, the game needs polishing before release. During beta testing, the team notices a familiar pattern.

New players drop off after the third boss encounter.

Experienced players move through early levels too quickly and disengage before mid-game systems unfold.

The traditional solution would be to tweak health values, adjust damage numbers, and rebalance difficulty tiers manually. That works, but it treats the audience as a single group. Instead, the team implements a lightweight adaptive system.

First, telemetry is structured to capture meaningful signals:

  • number of failed attempts per encounter
  • time-to-clear per level
  • healing item usage rate
  • reaction window timing
  • upgrade frequency

Within weeks, patterns emerge. Players struggling with boss mechanics show a specific behavioral signature: repeated short attempts, low healing consumption, and rapid retries.

Rather than lowering difficulty globally, the system adjusts selectively:

  • slightly extends parry timing windows for flagged players
  • reduces secondary enemy spawn frequency during boss fights
  • increases early gear drop probability

The changes are subtle. Most players never notice them directly. But frustration curves flatten. Retention improves.

Meanwhile, high-skill players trigger the opposite response. Enemy aggression increases marginally. Reward pacing slows to maintain challenge.

This isn’t machine learning in the cinematic sense. It’s structured data interpretation connected to controlled design levers.

The important part is architecture. Designers define what signals matter, what thresholds trigger adjustments, what parameters are safe to modify, and so on.

AI does not “decide” creatively. It monitors and activates predefined flexibility ranges. The result is not a different game for each player. It’s a game that responds within boundaries set by the design team.
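That architecture can be sketched roughly as follows. The behavioral signature mirrors the scenario above; the parameter names, safe ranges, and the “move halfway toward the forgiving end” rule are illustrative assumptions:

```python
# Sketch of designer-bounded adaptation: designers author the safe range
# for each parameter; the system only moves values within those bounds.
# All names and numbers are invented for illustration.

# Designer-authored flexibility ranges: (min, default, max)
SAFE_RANGES = {
    "parry_window_ms":   (120, 150, 200),
    "add_spawn_rate":    (0.5, 1.0, 1.0),
    "early_gear_chance": (0.10, 0.10, 0.25),
}

def is_struggling(sig: dict) -> bool:
    """The signature from the scenario: repeated short attempts, low healing."""
    return (sig["failed_attempts"] >= 3
            and sig["avg_attempt_s"] < 45
            and sig["heal_rate"] < 0.2)

def adjust(sig: dict) -> dict:
    """Start from defaults; for flagged players, nudge each parameter
    halfway toward its forgiving end, clamped to the designer's range."""
    params = {k: default for k, (_, default, _) in SAFE_RANGES.items()}
    if is_struggling(sig):
        for k, (lo, _, hi) in SAFE_RANGES.items():
            # For these parameters "more forgiving" means higher; for
            # spawn rate it means lower.
            target = hi if k in ("parry_window_ms", "early_gear_chance") else lo
            params[k] = min(hi, max(lo, params[k] + 0.5 * (target - params[k])))
    return params

print(adjust({"failed_attempts": 5, "avg_attempt_s": 30, "heal_rate": 0.1}))
print(adjust({"failed_attempts": 0, "avg_attempt_s": 90, "heal_rate": 0.6}))
```

The clamping is the whole point: no matter what the signals say, the system can never push a parameter outside what the design team has declared safe.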

And that is typically where agencies come in – designing the telemetry layer, building the response framework, and ensuring that adaptation enhances experience rather than destabilizing balance.

Conclusion. What the Next Few Years Look Like for AI in Game Development

The direction is pretty clear: AI use will keep expanding, but most of it will sit behind the curtain. Not “games made by AI,” but games made faster, tuned more precisely, and operated with more data-awareness.

What the data says right now:

  • AI adoption in dev workflows is already mainstream. Google Cloud’s Harris Poll study reported 90% of surveyed developers were integrating AI into workflows, and 95% said it reduces repetitive tasks.
  • Unity’s industry report points in the same direction, with broad adoption of AI tools in select workflows and a focus on speed and efficiency rather than replacement.
  • At the same time, player-facing use is still limited. A recent GDC survey summary reported that only a small share of developers are applying genAI directly to player-facing features, even while usage for research, brainstorming, and admin work is common.
  • Sentiment is mixed and getting tougher. The same GDC reporting shows that more than half of developers view genAI as having a negative impact on the industry (but they still use it anyway).

Where this is heading, based on those signals:

  • more “AI as copilot” in production – faster iteration, faster prototyping, faster localization, faster QA, more tooling around pipelines
  • more “AI as operations layer” in live games – balancing, tuning, moderation, and personalization powered by telemetry and guardrails (not freeform generation)
  • more governance, provenance, and rights management – adoption continues, but teams will be stricter about what data is used, what’s allowed in the pipeline, and what ships to players

A useful way to phrase the future without hype:

AI will push the industry toward smarter systems and faster production cycles, but the winning teams will treat it like infrastructure. Controlled, measurable, and aligned with art direction and design intent.

Our game artists and developers at Kevuru Games always aim to use new technologies to improve and optimize their work, rather than adopting them for their own sake. If you have to search for a specific use for AI, you probably don’t need it.

Here is what our 3D game development expert, Olga Andrianova, says:

Using models trained on other artists’ work is a harsh no. But if you manage to save hours of time on polishing little details thanks to AI tools, there is no shame in using them.

In a recent project, using AI tools in the pipeline helped deliver art 40% faster than usual – all this without generating images. Curious to find out how we did it? Read on here.

The post AI in Game Design: How Agencies Create Smarter Player Experiences And Where Is It All Going appeared first on Kevuru Games.
