Tag: vulkan

Entries for tag "vulkan", ordered from most recent. Entry count: 30.


# Which Values Are Scalar in a Shader?

Wed 14 Oct 2020

GPUs are highly parallel processors. Within one draw call or compute dispatch there may be thousands or millions of invocations of your shader. Some variables in such a shader have a constant value across all invocations in the draw call / dispatch. We can call them constant or uniform. A literal constant like 23.0 is surely such a value, and so is a variable read from a constant (uniform) buffer – let’s call it cbScaleFactor – as well as any calculation on such data, like (cbScaleFactor.x + cbScaleFactor.y) * 2.0 - 1.0.

Other values may vary from thread to thread. Vertex attributes surely do, as do system value semantics like SV_Position in a pixel shader (the position of the current pixel on the screen) or SV_GroupThreadID in a compute shader (the identifier of the current thread within a thread group), as well as any calculations based on them. For example, sampling a texture using non-constant UV coordinates yields a non-constant color value.
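To illustrate the distinction, here is a minimal GLSL fragment shader sketch (all names are made up), with each value annotated as uniform or varying:

layout(binding = 0) uniform Params { float scaleFactor; } params;
layout(binding = 1) uniform sampler2D colorTexture;

layout(location = 0) in vec2 inTexCoords; // varying - interpolated per pixel
layout(location = 0) out vec4 outColor;

void main()
{
    float s = params.scaleFactor * 2.0 - 1.0;        // uniform: derived from constant data only
    vec4 color = texture(colorTexture, inTexCoords); // varying: depends on per-pixel UVs
    outColor = color * s;                            // varying
}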

But there is another level of grouping of threads. GPU cores (Compute Units, Execution Units, CUDA cores – whatever we call them) execute a number of threads at once in a SIMD fashion – or, more correctly, SIMT. For an explanation of the difference, see my old post: “How Do Graphics Cards Execute Vector Instructions?”. It’s usually 8, 16, 32, or 64 threads executing on one core, together called a wave in HLSL and a subgroup in GLSL.

Normally you don’t need to care about this fact. However, recent versions of HLSL and GLSL added intrinsic functions that allow exchanging data between lanes (threads/invocations within a wave/subgroup) – see “HLSL Shader Model 6.0” or “Vulkan Subgroup Tutorial”. Using them can help optimize shader performance.

This additional level of grouping creates the possibility for a variable to be uniform (to have the same value) across a single wave even if it is not constant across the entire draw call or dispatch. We can also call such a value scalar, as it tends to go to scalar registers (SGPRs) rather than vector registers (VGPRs) on AMD architectures, which is generally good for performance. The simple cases mentioned above still apply: what is constant across the entire draw call is also scalar within a wave, while what varies from thread to thread is not scalar. Some wave functions, like WaveReadLaneFirst, WaveActiveMax, and WaveActiveAllTrue, return the same value for all threads, so their results are always scalar.
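In GLSL, the equivalent subgroup operations come from the GL_KHR_shader_subgroup extensions. A minimal sketch of values that are guaranteed scalar:

#extension GL_KHR_shader_subgroup_ballot : require
#extension GL_KHR_shader_subgroup_arithmetic : require

void example(float perThreadValue)
{
    // Both results are uniform (scalar) across the subgroup,
    // even though the input varies from thread to thread.
    float first = subgroupBroadcastFirst(perThreadValue);
    float maxVal = subgroupMax(perThreadValue);
}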

Knowing which values are scalar and which may not be is necessary in some cases. For example, indexing into an array of buffers or textures requires the special keyword NonUniformResourceIndex if the index is not uniform across the wave. I warned about it in my blog post “Direct3D 12 - Watch out for non-uniform resource index!”. Back then I was working on the shader compiler at Intel, helping to finish the DX12 implementation before the release of Windows 10. Now, 5 years later, it is still a tricky thing to get right.
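The GLSL equivalent for Vulkan is the nonuniformEXT qualifier from the GL_EXT_nonuniform_qualifier extension. A sketch (the descriptor layout is made up):

#extension GL_EXT_nonuniform_qualifier : require

layout(set = 0, binding = 0) uniform sampler2D textures[];

vec4 sampleMaterial(uint materialIndex, vec2 uv)
{
    // materialIndex may differ between threads within one wave/subgroup,
    // so the access must be marked as non-uniform.
    return texture(textures[nonuniformEXT(materialIndex)], uv);
}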

Another such case is the function WaveReadLaneAt, which “returns the value of the expression for the given lane index within the specified wave”. The index of the lane to fetch was originally required to be scalar, but developers discovered that it actually works fine with a dynamically varying value, as Ken Hu described in his blog post “HLSL pitfalls”. Microsoft has since formally admitted that it works and allowed LaneIndex to be any value by making this GitHub commit to their documentation.
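For comparison, the GLSL counterpart subgroupShuffle (from GL_KHR_shader_subgroup_shuffle) has always been specified to accept a varying lane index:

#extension GL_KHR_shader_subgroup_shuffle : require

float readFromLane(float value, uint laneIndex)
{
    // laneIndex may vary per thread - each thread fetches the value
    // held by a possibly different lane.
    return subgroupShuffle(value, laneIndex);
}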

Since it is so important to know where an argument needs to be scalar and which values are scalar, you should also know about some less obvious, tricky cases.

SV_GroupID in a compute shader – the identifier of the group within a compute dispatch. This one surely is uniform across the wave. I didn’t search the specifications for this topic, but it seems obvious: since groupshared memory is private to a thread group and a synchronization barrier can be issued across a thread group, threads from different groups cannot be assigned to a single wave – otherwise everything would break.

SV_InstanceID in a vertex shader – the index of an instance within an instanced draw call. It looks similar, but the answer is actually the opposite. I’ve seen discussions about it many times. It is not guaranteed anywhere that threads in one wave will calculate vertices of the same instance. While inconvenient for those who would like to optimize their vertex shader using wave functions, this actually gives a graphics driver the opportunity to increase utilization by packing vertices from multiple instances into one wave.

SV_GroupThreadID.xyz in a compute shader – the identifier of the thread within a thread group, in a particular dimension. The article “Porting Detroit: Become Human from PlayStation® 4 to PC – Part 2” on GPUOpen.com suggests that by using [numthreads(64,2,1)] you can be sure that waves will be dispatched as 32x1x1 or 64x1x1, so that SV_GroupThreadID.y will be scalar across a wave. This may be true for AMD architecture and other GPUs currently on the market, so relying on it may be a good optimization opportunity on consoles with known, fixed hardware, but it is not formally correct to assume it on PC. Neither the D3D nor the Vulkan specification says that threads from a compute thread group are assigned to waves in row-major order. The order is undefined, so a driver could theoretically decide in a new version to spawn waves of 16x2x1. It is also not guaranteed that some mysterious new GPU couldn’t appear in the future that is 128 lanes wide – the documentation of the WaveGetLaneCount function says “the result will be between 4 and 128”. Such a GPU would execute an entire 64x2x1 group as a single wave. In either case, SV_GroupThreadID.y wouldn’t be scalar.

Long story short: Unless you can prove otherwise, always assume that what is not uniform (constant) across the entire draw call or dispatch is also not uniform (scalar) across the wave.

#gpu #directx #vulkan #optimization

# System Value Semantics in Compute Shaders - Cheat Sheet

Tue 29 Sep 2020

After compute shaders appeared, programmers no longer need to pretend they are doing graphics and rendering pixels when they want to do general-purpose computations on a GPU (GPGPU). They can just dispatch a shader that reads and writes memory in a custom way. Such a shader is a short (or not so short) program invoked thousands or millions of times to process a piece of data. To work correctly, it needs to know which thread is the current one. Threads (invocations) of a compute shader are not just indexed linearly as 0, 1, 2, ... It's more complex than that. Their indexing can use up to 3 dimensions, which simplifies operating on data like images or matrices. They also come in groups, with the number of threads in one group declared statically as part of the shader code and the number of groups to execute passed dynamically in CPU code when dispatching the shader.
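In GLSL terms, the split looks like this (a minimal sketch; the dispatch call is shown as a comment):

// The size of one group is declared statically in the shader code:
// here 8 x 8 x 1 = 64 threads per group.
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;

void main()
{
    // The number of groups comes dynamically from the CPU side, e.g.:
    // vkCmdDispatch(commandBuffer, groupCountX, groupCountY, groupCountZ);
}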

This raises the question of how to identify the current thread. HLSL offers a number of system-value semantics for this purpose, and so does GLSL by defining equivalent built-in variables. For a long time I couldn't remember their names, as the ones in HLSL are quite misleading. If SV_GroupID is the ID of the entire group and SV_GroupThreadID is the ID of a thread within a group, then SV_GroupIndex should be a flattened index of the entire group, right? Wrong! It's actually the flattened index of a single thread within a group. GLSL is more consistent in this regard, clearly distinguishing "WorkGroup" versus "Invocation" and "Local" versus "Global". So, although Microsoft provides a great explanation of their SVs with a picture on pages like SV_DispatchThreadID, I thought it would be nice to gather all this in the form of a table – a small cheat sheet:

| HLSL Semantic | GLSL Variable | Type (Dimension) | Unit | Reference |
|---|---|---|---|---|
| SV_GroupID | gl_WorkGroupID | uint3 (3D) | Entire group | Global in dispatch |
| SV_GroupThreadID | gl_LocalInvocationID | uint3 (3D) | Single thread | Local in group |
| SV_DispatchThreadID | gl_GlobalInvocationID | uint3 (3D) | Single thread | Global in dispatch |
| SV_GroupIndex | gl_LocalInvocationIndex | uint (flattened) | Single thread | Local in group |
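These variables are tied together by simple formulas, which the GLSL specification states explicitly. In GLSL notation (gl_WorkGroupSize is the group size declared in the shader):

// Global ID = group ID scaled by the group size, plus the local ID.
uvec3 globalId = gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID;

// The flattened local index enumerates the threads of one group in x-major order.
uint localIndex =
    gl_LocalInvocationID.z * gl_WorkGroupSize.x * gl_WorkGroupSize.y +
    gl_LocalInvocationID.y * gl_WorkGroupSize.x +
    gl_LocalInvocationID.x;

// globalId == gl_GlobalInvocationID, localIndex == gl_LocalInvocationIndex.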

#gpu #directx #opengl #vulkan

# Why Not Use Heterogeneous Multi-GPU?

Wed 22 Jul 2020

There was an interesting discussion recently on one Slack channel about using an integrated GPU (iGPU) together with a discrete GPU (dGPU). Many sound ideas were mentioned there, so I think they are worth writing down. But because I have probably never blogged about multi-GPU before, a few words of introduction first:

The idea of using multiple GPUs in one program is not new, but not very widespread either. In old graphics APIs like Direct3D 11 it wasn't easy to implement. Doing it right in a complex game often involved engaging driver engineers from the GPU manufacturer (like AMD or NVIDIA) or using custom vendor extensions (like AMD GPU Services – see for example Explicit Crossfire API).

The new generation of graphics APIs – Direct3D 12 and Vulkan – are lower level and give more direct access to the hardware. This includes the possibility to implement multi-GPU support on your own. There are two modes of operation. If the GPUs are identical (e.g. two graphics cards of the same model plugged into the motherboard), you can use them as one device object. In D3D12 you then index them as Node 0, Node 1, ... and specify a NodeMask bit mask when allocating GPU memory, submitting commands, and doing all sorts of GPU things. Similarly, in Vulkan you have the VK_KHR_device_group extension available, which allows you to create one logical device object that will use multiple physical devices.
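On the Vulkan side, a minimal sketch of this path may look like the following (assuming Vulkan 1.1, where the functionality was promoted to core; instance is your VkInstance, error handling omitted):

#include <vector>

uint32_t groupCount = 0;
vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
for(VkPhysicalDeviceGroupProperties& g : groups)
    g = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES };
vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

// Create one logical device spanning all physical devices of group 0.
VkDeviceGroupDeviceCreateInfo groupInfo = {
    VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO };
groupInfo.physicalDeviceCount = groups[0].physicalDeviceCount;
groupInfo.pPhysicalDevices = groups[0].physicalDevices;

VkDeviceCreateInfo deviceInfo = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
deviceInfo.pNext = &groupInfo;
// ... fill queue create infos etc., then call vkCreateDevice as usual.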

But this post is about heterogeneous/asymmetric multi-GPU, where there are two different GPUs installed in the system, e.g. one integrated with the CPU and one discrete. A common example is a laptop with "switchable graphics", which may have an Intel CPU with its integrated "HD" graphics plus an NVIDIA GPU. There may even be two different GPUs from the same manufacturer! My new laptop (ASUS TUF Gaming FX505DY) has an AMD Radeon Vega 8 + Radeon RX 560X. Another example is a desktop PC with CPU-integrated graphics and a discrete graphics card installed. Such a combination can still be used by a single app, but to do that, you must create and use two separate Device objects. But just because you can doesn't mean you should…

The first question is: are there games that support this technique? Probably only a few… There is just one example I have heard of: Ashes of the Singularity by Oxide Games, and that was many years ago, when DX12 was still fresh. Other than that, there are mostly tech demos, e.g. "WITCH CHAPTER 0 [cry]" by Square Enix, as described on the DirectX Developer Blog (also 5 years old).

An iGPU typically has lower computational power than a dGPU, but it could accelerate some pieces of the computation needed each frame. One idea is to hand over the already rendered 3D scene to the iGPU so it can finish it with screen-space postprocessing effects and present it, which sounds even better if the display is connected to the iGPU. Another option is to accelerate some computations, like occlusion culling, particles, or water simulation. There are some excellent learning materials about this technique. The best one I can think of is the talk Multi-Adapter with Integrated and Discrete GPUs by Allen Hux (Intel), GDC 2020.

However, this technique also has many drawbacks, which were discussed in the Slack chat I mentioned.

Conclusion: Supporting heterogeneous multi-GPU in a game engine sounds like an interesting technical challenge, but you'd better think twice before doing it in production code.

BTW, if you just want to use one GPU and worry about selecting the right one, see my old post: Switchable graphics versus D3D11 adapters.

#rendering #directx #vulkan #microsoft

# Texture Compression: What Can It Mean?

Sun 15 Mar 2020

"Data compression - the process of encoding information using fewer bits than the original representation." That's the definition from Wikipedia. But when we talk about textures (images that we use while rendering 3D graphics), it's not that simple. There are 4 different things we can mean by talking about texture compression, some of them you may not know. In this article, I'd like to give you some basic information about them.

1. Lossless data compression. This is the compression used to shrink binary data in size while losing not a single bit. We may mean compression algorithms and the libraries that implement them, like the popular zlib or LZMA SDK. We may also mean file formats like ZIP or 7Z, which use these algorithms but additionally define a way to pack multiple files, with their whole directory structure, into a single archive file.

An important thing to note here is that this compression can be applied to any data. Some file types, like text documents or binary executables, have to be compressed in a lossless way so that no bits are lost or altered. You can also compress image files this way. The compression ratio depends on the data: the compressed file will be smaller if there are many repeating patterns – when the data looks pretty boring, like many pixels of the same color. If the data varies more, with every next pixel having even a slightly different value, you may end up with a compressed file as large as the original, or even larger. For example, the following two images both have a size of 480 x 480. Saved as an uncompressed BMP R8G8B8 file, each takes 691,322 bytes. Compressed to a ZIP file, the first one takes only 15,993 bytes, while the second one takes 552,782 bytes.
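As a sketch of how such libraries are used, here is zlib's one-shot API (error handling omitted):

#include <zlib.h>
#include <vector>

// Compress a memory buffer losslessly with zlib (DEFLATE).
std::vector<Bytef> Compress(const Bytef* src, uLong srcLen)
{
    uLongf destLen = compressBound(srcLen); // worst-case output size
    std::vector<Bytef> dest(destLen);
    compress2(dest.data(), &destLen, src, srcLen, Z_BEST_COMPRESSION);
    dest.resize(destLen); // shrink to the actual compressed size
    return dest;
}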

We can talk about this kind of compression in the context of textures because assets in games are often packed into archives in some custom format, which protects the data from modification, speeds up loading, and may also use compression. For example, the new Call of Duty: Warzone takes 162 GB of disk space after installation, but it contains only 442 files, because the developers packed the largest data into archives – files Data/data/data.000, 001, etc., 1 GB each.

2. Lossy compression. These are algorithms that allow some data loss but in exchange offer higher compression ratios than lossless ones. We use them for specific kinds of data, usually media – images, sound, and video. For video it's virtually essential, because raw uncompressed data would take enormous space for every second of recording. Algorithms for lossy compression use knowledge about the structure of the data to remove information that will be unnoticeable, or that degrades quality to the smallest degree possible from the perspective of human perception. We all know them – formats like JPEG for images and MP3 for music.

They have their pros and cons. JPEG compresses images in 8x8 blocks using the Discrete Cosine Transform (DCT). You can find an awesome, in-depth explanation of it on the page Unraveling the JPEG. It's good for natural images, but with text and diagrams it may fail to maintain the desired quality. My first example saved as JPEG with Quality = 20% (this is very low – I usually use 90%) takes only 24,753 bytes, but it looks like this:

GIF is good for such synthetic images but fails on natural images. I saved my second example as a GIF with a color palette of 32 entries. The file is only 90,686 bytes, but it looks like this (look closely to see the dithering used due to the limited number of colors):

Lossy compression is usually accompanied by lossless compression – file formats like JPEG, GIF, MP3, and MP4 compress the data losslessly on top of their core algorithm, so there is no point in compressing such files again.

3. GPU texture compression. Here comes the interesting part. All the formats described so far are designed to optimize data storage and transfer. We need to decompress textures packed in ZIP files or saved as JPEG before uploading them to video memory and using them for rendering. But there are other texture compression formats that can be used by the GPU directly. They are lossy as well, but they work in a different way: they use a fixed number of bytes per block of NxN pixels. Thanks to this, a graphics card can easily pick the right block from memory and decompress it on the fly, e.g. while sampling the texture. Examples of such formats are BC1..7 (which stands for Block Compression) and ASTC (used on mobile platforms). For example, BC7 uses 1 byte per pixel, or 16 bytes per 4x4 block. You can find an overview of these formats here: Understanding BCn Texture Compression Formats.
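Because the per-block size is fixed, computing the size of a BC7 mip level is simple arithmetic – a sketch:

#include <cstdint>

// BC7: 16 bytes per 4x4 block; width and height are rounded up to whole blocks.
uint32_t BC7MipSizeInBytes(uint32_t width, uint32_t height)
{
    uint32_t blocksX = (width + 3) / 4;
    uint32_t blocksY = (height + 3) / 4;
    return blocksX * blocksY * 16;
}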

The only file format I know that supports this compression is DDS, as it allows storing any texture that can be loaded straight into DirectX, in various pixel formats – including not only block-compressed but also cube, 3D, etc. Most game developers design their own file formats for this purpose anyway, to load data straight into GPU memory with no conversion.

4. Internal GPU texture compression. Pixels of a texture may not be stored in video memory the way you think – in row-major order, one pixel after another, in R8G8B8A8 or whatever format you chose. When you create a texture with D3D12_TEXTURE_LAYOUT_UNKNOWN / VK_IMAGE_TILING_OPTIMAL (always do that, except for some very special cases), the GPU is free to use an optimized internal layout. This may not be true "compression" by its definition, because it must be lossless, so the memory reserved for the texture will not be smaller. It may even be larger, because of the need to store additional metadata. (That's why you have to take care of the extra VK_IMAGE_ASPECT_METADATA_BIT when working with sparse textures in Vulkan.) The goal of these formats is to speed up access to the texture.

Details of these formats are specific to GPU vendors and may or may not be public. Some ideas of how a GPU could optimize a texture in its memory include, for example:

- Reordering (swizzling) pixels in memory so that pixels close to each other in 2D are also close in memory, improving cache locality when sampling.
- Keeping metadata that marks regions of the texture as "cleared", so a clear operation doesn't have to touch every pixel and reads can return the clear color immediately.
- Delta color compression: storing a base color per block plus compact per-pixel deltas when the pixels within a block are similar.

How can you make the best use of these internal GPU compression formats if they differ per graphics card vendor and we don't know their details? Just make sure you leave the driver as many optimization opportunities as possible, by:

- Creating textures with D3D12_TEXTURE_LAYOUT_UNKNOWN / VK_IMAGE_TILING_OPTIMAL, as mentioned above, so the driver is free to choose the layout.
- Not setting resource flags or usage bits you don't actually need (e.g. D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS / VK_IMAGE_USAGE_STORAGE_BIT), as some of them may force the driver to disable compression.

See also the article Delta Color Compression Overview at GPUOpen.com.

Summary: As you can see, the term "texture compression" can mean different things, so when talking about it, always make sure it's clear what you mean, unless it's obvious from the context.

#rendering #vulkan #directx

# Vulkan Memory Allocator - budget management

Wed 06 Nov 2019

Querying for the memory budget and staying within it is a much-needed feature of the Vulkan Memory Allocator library. I have implemented a prototype of it on a separate branch: "MemoryBudget".

The branch also contains documentation of all the new symbols, plus a general chapter "Staying within budget" that describes this topic. The documentation is pregenerated, so it can be accessed by just downloading the repository as a ZIP, unpacking it, and opening the file "docs\html\index.html", chapter “Staying within budget”.

If you are interested, please take a look. Any feedback is welcome – you can leave a comment below or send me an e-mail. Now is the best time to adjust this feature to users' needs, before it gets into an official release of the library.

Long story short: the library can now report, for each Vulkan memory heap, the number of bytes currently allocated and an estimated budget available to the program (based on the VK_EXT_memory_budget extension where available), and individual allocations can optionally be made to fail instead of exceeding that budget.
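A minimal usage sketch, based on the API as it later shipped in VMA 2.3 (names on the prototype branch may have differed slightly):

// Query current usage and budget for every Vulkan memory heap.
VmaBudget budgets[VK_MAX_MEMORY_HEAPS] = {};
vmaGetBudget(allocator, budgets);
printf("Heap 0: %llu B used, %llu B budget\n",
    (unsigned long long)budgets[0].usage,
    (unsigned long long)budgets[0].budget);

// Optionally make a specific allocation fail rather than exceed the budget:
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT;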

Update 2019-12-20: This has been merged into the master branch and shipped with the latest major release: Vulkan Memory Allocator 2.3.0.

#vulkan #libraries #productions

# Differences in memory management between Direct3D 12 and Vulkan

Fri 26 Jul 2019

Since July 2017 I have been developing Vulkan Memory Allocator (VMA) – a C++ library that helps with memory management in games and other applications using Vulkan. But because I deal with both Vulkan and DirectX 12 in my everyday work, I think it's a good idea to compare them.

This is an article about a very specific topic. It may be useful to you if you are a programmer working with both graphics APIs – Direct3D 12 and Vulkan. The two APIs offer a similar set of features and performance. Both are new-generation, explicit, low-level interfaces to modern graphics hardware (GPUs), so we can compare them back to back to show their similarities and differences, e.g. in how things are named. For example, the ID3D12CommandQueue::ExecuteCommandLists function has its Vulkan equivalent in the vkQueueSubmit function. However, this article focuses on just one aspect – memory management, meaning the rules and limitations of GPU memory allocation and of the creation of resources: images (textures, render targets, depth-stencil surfaces, etc.) and buffers (vertex buffers, index buffers, constant/uniform buffers, etc.). The chapters below describe pretty much all the aspects of memory management that differ between the two APIs.

Read full article »

#vulkan #directx #gpu

# Vulkan: Long way to access data

Thu 18 Apr 2019

If you want to access some GPU memory in a shader, there are multiple levels of indirection that you need to go through. Understanding them is an important part of learning the Vulkan API. Here is an explanation of this whole path.

Let’s take texture sampling as an example. We will start from the shader code and go from there back to the GPU memory where the pixels of the texture are stored. If you write your shaders in GLSL, you can use the texture function to do the sampling. You need to provide the name of a sampler, as well as texture coordinates.

vec4 sampledColor = texture(sampler1, texCoords);

Earlier in the shader, the sampler needs to be defined. Together with this definition you provide the index of a slot where this sampler and texture will be bound when the shader executes. Binding resources to numbered slots is a concept that has existed in various graphics APIs for some time already. In Vulkan there are actually two numbers: the index of a descriptor set and the index of a specific binding within that set. A sampler definition in GLSL may look like this:

layout(set=0, binding=1) uniform sampler2D sampler1;

What you bind to this slot is not the texture itself, but a so-called descriptor. Descriptors are grouped into descriptor sets – objects of type VkDescriptorSet. They are allocated from a VkDescriptorPool (which we ignore here for simplicity) and they must comply with some VkDescriptorSetLayout. When defining the layout of a descriptor set, you may specify that binding number 1 will contain a combined image sampler. (This is just one way of doing it. There are other possibilities, like descriptors of type sampled image, sampler, storage image, etc.)

VkDescriptorSetLayoutBinding binding1 = {};
binding1.binding = 1;
binding1.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
binding1.descriptorCount = 1;
binding1.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;

VkDescriptorSetLayoutCreateInfo layoutInfo = {
    VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO };
layoutInfo.bindingCount = 1;
layoutInfo.pBindings = &binding1;

VkDescriptorSetLayout descriptorSetLayout1;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &descriptorSetLayout1);

VkDescriptorSetAllocateInfo setAllocInfo = {
    VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO };
setAllocInfo.descriptorPool = descriptorPool1; // You need to have that already.
setAllocInfo.descriptorSetCount = 1;
setAllocInfo.pSetLayouts = &descriptorSetLayout1;

VkDescriptorSet descriptorSet1;
vkAllocateDescriptorSets(device, &setAllocInfo, &descriptorSet1);

When you have the descriptor set layout created, as well as a descriptor set based on it allocated, you need to bind that descriptor set under set index 0 in the command buffer you are recording, before you issue a draw call that will use our shader. The function vkCmdBindDescriptorSets is defined for this purpose. Note that it takes a VkPipelineLayout – a separate object created from one or more descriptor set layouts – rather than the set layout itself.
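A minimal sketch of creating that pipeline layout (assuming no push constants):

VkPipelineLayoutCreateInfo pipelineLayoutInfo = {
    VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO };
pipelineLayoutInfo.setLayoutCount = 1;
pipelineLayoutInfo.pSetLayouts = &descriptorSetLayout1;

VkPipelineLayout pipelineLayout1;
vkCreatePipelineLayout(device, &pipelineLayoutInfo, nullptr, &pipelineLayout1);

With the pipeline layout ready, we can record the binding command: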

vkCmdBindDescriptorSets(
    commandBuffer1,
    VK_PIPELINE_BIND_POINT_GRAPHICS,
    pipelineLayout1, // A VkPipelineLayout, not the descriptor set layout!
    0, // firstSet
    1, // descriptorSetCount
    &descriptorSet1,
    0, // dynamicOffsetCount
    nullptr); // pDynamicOffsets

How do you set up the descriptor to point to a specific texture? There are multiple ways to do that. The most basic one is to use the vkUpdateDescriptorSets function:

VkDescriptorImageInfo imageInfo = {};
imageInfo.sampler = sampler1;
imageInfo.imageView = imageView1;
imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

VkWriteDescriptorSet descriptorWrite = {
    VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET };
descriptorWrite.dstSet = descriptorSet1;
descriptorWrite.dstBinding = 1;
descriptorWrite.descriptorCount = 1;
descriptorWrite.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
descriptorWrite.pImageInfo = &imageInfo;

vkUpdateDescriptorSets(
    device,
    1, // descriptorWriteCount
    &descriptorWrite, // pDescriptorWrites
    0, // descriptorCopyCount
    nullptr); // pDescriptorCopies

Please note that this function doesn’t record a command into any command buffer. The descriptor update happens immediately. That’s why you need to do it before you submit your command buffer for execution on the GPU, and why you need to keep this descriptor set alive and unchanged until the command buffer finishes execution.

There are other ways to update a descriptor set. You can e.g. use the last two parameters of the vkUpdateDescriptorSets function to copy descriptors (which is not recommended for performance reasons), or use extensions like VK_KHR_push_descriptor and VK_KHR_descriptor_update_template.

What we write as the value of the descriptor is a reference to two objects: imageView1 and sampler1. Let’s ignore the sampler and focus on imageView1. This is an object of type VkImageView. Like in Direct3D 11, an image view is a simple object that encapsulates a reference to an image along with a set of additional parameters that let you “view” the image in a certain way, e.g. limit access to a range of mipmap levels or array layers, or reinterpret the contents as a different format.

VkImageViewCreateInfo viewInfo = {
    VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO };
viewInfo.image = image1;
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
viewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel = 0;
viewInfo.subresourceRange.levelCount = 1;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount = 1;

VkImageView imageView1;
vkCreateImageView(device, &viewInfo, nullptr, &imageView1);

As you can see, the image view object holds a reference to image1. This is an object of type VkImage that represents the actual resource – commonly called a “texture” in other APIs. It is created from a rich set of parameters, like width, height, pixel format, number of mipmap levels, etc.

VkImageCreateInfo imageInfo = {
    VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.extent.width = 1024;
imageInfo.extent.height = 1024;
imageInfo.extent.depth = 1;
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imageInfo.usage =
    VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;

VkImage image1;
vkCreateImage(device, &imageInfo, nullptr, &image1);

That’s not all yet. Unlike in previous-generation graphics APIs (Direct3D 9 or 11, OpenGL), an image or buffer object doesn’t automatically allocate backing memory for its data. You need to do it on your own. What you actually need to do is first query the image for its memory requirements (required size and alignment), then allocate a memory block for it, and finally bind the two together. Only then is the image usable as a means of accessing the memory, interpreted as colorful pixels of a 2D picture.

VkMemoryRequirements memReq;
vkGetImageMemoryRequirements(device, image1, &memReq);

VkMemoryAllocateInfo allocInfo = {
    VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
allocInfo.allocationSize = memReq.size;
allocInfo.memoryTypeIndex = 0; // You need to find an appropriate index - see below!

VkDeviceMemory memory1;
vkAllocateMemory(device, &allocInfo, nullptr, &memory1);

vkBindImageMemory(
    device,
    image1,
    memory1,
    0); // memoryOffset
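Finding the right memoryTypeIndex deserves a word of explanation: you query the physical device for its available memory types and pick one that is allowed by the image's memoryTypeBits and has the property flags you need. A sketch of such a helper (the function name is made up):

uint32_t FindMemoryType(VkPhysicalDevice physicalDevice,
    uint32_t allowedTypeBits, VkMemoryPropertyFlags requiredFlags)
{
    VkPhysicalDeviceMemoryProperties memProps;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);
    for(uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
    {
        // The type must be allowed for this resource and offer the required properties.
        if((allowedTypeBits & (1u << i)) != 0 &&
            (memProps.memoryTypes[i].propertyFlags & requiredFlags) == requiredFlags)
        {
            return i;
        }
    }
    return UINT32_MAX; // No suitable memory type found.
}

For a sampled image like ours, requiredFlags would typically be VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, and allowedTypeBits would be memReq.memoryTypeBits.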

In production-quality code you should of course check for error codes, e.g. when your allocation fails because you have run out of memory. You should also avoid allocating separate memory blocks for each of your images and buffers. It is necessary to allocate bigger memory blocks and manage them manually, assigning parts of them to your resources. You can use the last parameter of the binding function to provide an offset in bytes from the start of the memory block. You can also simplify this part by using an existing library: Vulkan Memory Allocator.
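With that library, the whole create-query-allocate-bind sequence above collapses to a single call. A sketch based on its 2.x API (allocator is your VmaAllocator):

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY; // device-local memory

VkImage image1;
VmaAllocation allocation1;
vmaCreateImage(allocator, &imageInfo, &allocCreateInfo,
    &image1, &allocation1, nullptr);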

#graphics #vulkan

# Vulkan Memory Allocator Survey March 2019

Mon 04 Mar 2019

Are you a software developer who uses Vulkan and the Vulkan Memory Allocator library (or has at least considered using it)? If so, please spend a few minutes and help shape the future of the library by participating in the survey:

» Vulkan Memory Allocator Survey March 2019

Your feedback is greatly appreciated. The survey is anonymous – no personal data like name or e-mail is collected. All questions are optional.

#productions #libraries #vulkan

