Quick tip – forward slash autocomplete

A coworker of mine was complaining the other day that Visual Studio’s autocomplete defaults to backslashes in #include paths. This is pretty annoying if you’re doing cross-platform development, since most platforms only allow forward slashes.

Thankfully, you can fix this! The setting is a bit buried, though. In Visual Studio 2017, from Tools->Options:

Alternately, if (like me) you never use autocomplete for include paths, you can use the setting just above that one to disable it entirely.

Quick tip – forcing your app to use the higher-performance GPU

I recently switched from a home-built desktop PC to a laptop with an external GPU enclosure, and was surprised to discover that bytopia immediately crashed on startup on this system.

It turns out the NVIDIA driver isn’t always too smart about choosing which GPU to assign to a particular app, and was giving me the integrated Intel chip, which lacked the OpenGL 4.5 features that I’m using.1

You can of course solve this locally in the NVIDIA driver settings by forcing it to use the high-performance GPU, but I’d rather not have to ask every user to figure that out. So, after some google searching (resulting in a few false starts), I found this NVIDIA technical note, which explains the very hacky process by which you can force your app to use the high-performance GPU:

#include <windows.h>  // for DWORD

extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

This needs to be in the executable; (annoyingly) it won’t work in a DLL. And, needless to say, this solution is Windows-only; I don’t know if there’s something equivalent for OSX and/or Linux systems.

It turns out that AMD’s method is similar, except their variable is called AmdPowerXpressRequestHighPerformance.2 So, to cover all bases:

#include <windows.h>  // for DWORD

extern "C" {
    // These must live in the executable itself, not in a DLL.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
}

I don’t have an AMD card handy so I couldn’t test the AMD version, but I can verify the NVIDIA version worked for me, with one caveat: it doesn’t work with a debugger attached.3  So I still ended up having to force the GPU choice in the driver settings locally, but at least when I distribute the game it will work for other people.


1 I will likely support earlier OpenGL versions eventually, although the performance of the Intel chips I’d need it for is so poor I’m not sure it’s worth the effort–and using direct state access does make the rendering layer easier to read…
2 Via this thread on AMD’s community forums. I don’t have an AMD graphics card to test this with, though, so I’m taking their word for it.
3 Presumably the driver has to inspect the executable to see if it exports the NvOptimusEnablement variable, and having the debugger attached prevents that somehow. (Full disclosure: I don’t know much about how debuggers work. 😛 )

Quick Tip – disabling optimization without getting yourself in trouble

If you’ve worked on a nontrivial game in C++, you’ve probably run into a situation where you’d like to step through some code in the debugger, but the debug build of your game is painfully slow and debugging in release mode is difficult and time-consuming.

Here’s something you probably know (but if you don’t, it will change your life): you can disable optimization selectively per file, and thus have access to good debug information while not crippling your performance by running in debug mode.

In the Microsoft compiler, you do it this way:

#pragma optimize("", off)

This will turn off all optimization in whatever source file you put it in. (The optimize pragma has some options to make the changes more granular, but I’ve never really had a need for that.)

There is, however, a subtle problem here: it is very easy to forget to remove that one innocuous line after adding it, and end up, in the worst case, shipping your game with optimization turned off for some files.1 It would be nice if the compiler would let us know if we forgot to remove this pragma, right?

What I do is create a no_opt.h header and include it in any files I’d like to be able to step through:

#pragma once

#pragma message("no_opt.h included in " __FILE__)

#ifdef _RELEASE_FINAL
#error no_opt.h included in release final build; remove.
#else
#pragma optimize("", off)
#endif

Replace _RELEASE_FINAL with whatever symbol identifies the “final” build configuration you release to end users; i.e. the configuration that’s built by your build system.2

With that, the compiler will spit out a message for every file that has optimizations disabled. Further, if you don’t notice the message, it will fail to compile on your final build, giving you a second chance to remove the header.

(I’ve only really done this in Microsoft’s compiler personally, but clang and gcc appear to have similar pragmas available, so it should be easy to extend this to them.)


1. If you’re thinking to yourself that you’re not that forgetful and have always remembered to remove it, I’m sorry to report that you’ve almost definitely shipped code with optimizations disabled.
2. This practice may sound strange if you’re not familiar with it, but at least in game development it’s not unusual to have a “release” build and a “final release” build, where the former will enable optimizations and the latter might go further by turning off developer tools, removing symbols, etc.