//
// Copyright (c) 2017-2025 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//

#ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
#define AMD_VULKAN_MEMORY_ALLOCATOR_H

/** \mainpage Vulkan Memory Allocator

<b>Version 3.2.1</b>

Copyright (c) 2017-2025 Advanced Micro Devices, Inc. All rights reserved. \n
License: MIT \n
See also: [product page on GPUOpen](https://gpuopen.com/gaming-product/vulkan-memory-allocator/),
[repository on GitHub](https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator)


<b>API documentation divided into groups:</b> [Topics](topics.html)

<b>General documentation chapters:</b>

- <b>User guide</b>
  - \subpage quick_start
    - [Project setup](@ref quick_start_project_setup)
    - [Initialization](@ref quick_start_initialization)
    - [Resource allocation](@ref quick_start_resource_allocation)
  - \subpage choosing_memory_type
    - [Usage](@ref choosing_memory_type_usage)
    - [Required and preferred flags](@ref choosing_memory_type_required_preferred_flags)
    - [Explicit memory types](@ref choosing_memory_type_explicit_memory_types)
    - [Custom memory pools](@ref choosing_memory_type_custom_memory_pools)
    - [Dedicated allocations](@ref choosing_memory_type_dedicated_allocations)
  - \subpage memory_mapping
    - [Copy functions](@ref memory_mapping_copy_functions)
    - [Mapping functions](@ref memory_mapping_mapping_functions)
    - [Persistently mapped memory](@ref memory_mapping_persistently_mapped_memory)
    - [Cache flush and invalidate](@ref memory_mapping_cache_control)
  - \subpage staying_within_budget
    - [Querying for budget](@ref staying_within_budget_querying_for_budget)
    - [Controlling memory usage](@ref staying_within_budget_controlling_memory_usage)
  - \subpage resource_aliasing
  - \subpage custom_memory_pools
    - [Choosing memory type index](@ref custom_memory_pools_MemTypeIndex)
    - [When not to use custom pools](@ref custom_memory_pools_when_not_use)
    - [Linear allocation algorithm](@ref linear_algorithm)
      - [Free-at-once](@ref linear_algorithm_free_at_once)
      - [Stack](@ref linear_algorithm_stack)
      - [Double stack](@ref linear_algorithm_double_stack)
      - [Ring buffer](@ref linear_algorithm_ring_buffer)
  - \subpage defragmentation
  - \subpage statistics
    - [Numeric statistics](@ref statistics_numeric_statistics)
    - [JSON dump](@ref statistics_json_dump)
  - \subpage allocation_annotation
    - [Allocation user data](@ref allocation_user_data)
    - [Allocation names](@ref allocation_names)
  - \subpage virtual_allocator
  - \subpage debugging_memory_usage
    - [Memory initialization](@ref debugging_memory_usage_initialization)
    - [Margins](@ref debugging_memory_usage_margins)
    - [Corruption detection](@ref debugging_memory_usage_corruption_detection)
    - [Leak detection features](@ref debugging_memory_usage_leak_detection)
  - \subpage other_api_interop
- \subpage usage_patterns
  - [GPU-only resource](@ref usage_patterns_gpu_only)
  - [Staging copy for upload](@ref usage_patterns_staging_copy_upload)
  - [Readback](@ref usage_patterns_readback)
  - [Advanced data uploading](@ref usage_patterns_advanced_data_uploading)
  - [Other use cases](@ref usage_patterns_other_use_cases)
- \subpage configuration
  - [Pointers to Vulkan functions](@ref config_Vulkan_functions)
  - [Custom host memory allocator](@ref custom_memory_allocator)
  - [Device memory allocation callbacks](@ref allocation_callbacks)
  - [Device heap memory limit](@ref heap_memory_limit)
- <b>Extension support</b>
  - \subpage vk_khr_dedicated_allocation
  - \subpage enabling_buffer_device_address
  - \subpage vk_ext_memory_priority
  - \subpage vk_amd_device_coherent_memory
  - \subpage vk_khr_external_memory_win32
- \subpage general_considerations
  - [Thread safety](@ref general_considerations_thread_safety)
  - [Versioning and compatibility](@ref general_considerations_versioning_and_compatibility)
  - [Validation layer warnings](@ref general_considerations_validation_layer_warnings)
  - [Allocation algorithm](@ref general_considerations_allocation_algorithm)
  - [Features not supported](@ref general_considerations_features_not_supported)

\defgroup group_init Library initialization

\brief API elements related to the initialization and management of the entire library, especially the #VmaAllocator object.

\defgroup group_alloc Memory allocation

\brief API elements related to the allocation, deallocation, and management of Vulkan memory, buffers, and images.
The most basic ones are vmaCreateBuffer() and vmaCreateImage().

\defgroup group_virtual Virtual allocator

\brief API elements related to the mechanism of \ref virtual_allocator - using the core allocation algorithm
for a user-defined purpose, without allocating any real GPU memory.

\defgroup group_stats Statistics

\brief API elements that query the current status of the allocator, from memory usage and budget to a full dump of the internal state in JSON format.
See documentation chapter: \ref statistics.
*/


#ifdef __cplusplus
extern "C" {
#endif

#if !defined(VULKAN_H_)
#include <vulkan/vulkan.h>
#endif

#if !defined(VMA_VULKAN_VERSION)
    #if defined(VK_VERSION_1_4)
        #define VMA_VULKAN_VERSION 1004000
    #elif defined(VK_VERSION_1_3)
        #define VMA_VULKAN_VERSION 1003000
    #elif defined(VK_VERSION_1_2)
        #define VMA_VULKAN_VERSION 1002000
    #elif defined(VK_VERSION_1_1)
        #define VMA_VULKAN_VERSION 1001000
    #else
        #define VMA_VULKAN_VERSION 1000000
    #endif
#endif

#if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
    extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
    extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
    extern PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    extern PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    extern PFN_vkAllocateMemory vkAllocateMemory;
    extern PFN_vkFreeMemory vkFreeMemory;
    extern PFN_vkMapMemory vkMapMemory;
    extern PFN_vkUnmapMemory vkUnmapMemory;
    extern PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    extern PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    extern PFN_vkBindBufferMemory vkBindBufferMemory;
    extern PFN_vkBindImageMemory vkBindImageMemory;
    extern PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    extern PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    extern PFN_vkCreateBuffer vkCreateBuffer;
    extern PFN_vkDestroyBuffer vkDestroyBuffer;
    extern PFN_vkCreateImage vkCreateImage;
    extern PFN_vkDestroyImage vkDestroyImage;
    extern PFN_vkCmdCopyBuffer vkCmdCopyBuffer;
    #if VMA_VULKAN_VERSION >= 1001000
        extern PFN_vkGetBufferMemoryRequirements2 vkGetBufferMemoryRequirements2;
        extern PFN_vkGetImageMemoryRequirements2 vkGetImageMemoryRequirements2;
        extern PFN_vkBindBufferMemory2 vkBindBufferMemory2;
        extern PFN_vkBindImageMemory2 vkBindImageMemory2;
        extern PFN_vkGetPhysicalDeviceMemoryProperties2 vkGetPhysicalDeviceMemoryProperties2;
    #endif // #if VMA_VULKAN_VERSION >= 1001000
#endif // #if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS

#if !defined(VMA_DEDICATED_ALLOCATION)
    #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
        #define VMA_DEDICATED_ALLOCATION 1
    #else
        #define VMA_DEDICATED_ALLOCATION 0
    #endif
#endif

#if !defined(VMA_BIND_MEMORY2)
    #if VK_KHR_bind_memory2
        #define VMA_BIND_MEMORY2 1
    #else
        #define VMA_BIND_MEMOR2 0
    #endif
#endif

#if !defined(VMA_MEMORY_BUDGET)
    #if VK_EXT_memory_budget && (VK_KHR_get_physical_device_properties2 || VMA_VULKAN_VERSION >= 1001000)
        #define VMA_MEMORY_BUDGET 1
    #else
        #define VMA_MEMORY_BUDGET 0
    #endif
#endif

// Defined to 1 when the VK_KHR_buffer_device_address device extension or the equivalent core Vulkan 1.2 feature is defined in the Vulkan headers.
#if !defined(VMA_BUFFER_DEVICE_ADDRESS)
    #if VK_KHR_buffer_device_address || VMA_VULKAN_VERSION >= 1002000
        #define VMA_BUFFER_DEVICE_ADDRESS 1
    #else
        #define VMA_BUFFER_DEVICE_ADDRESS 0
    #endif
#endif

// Defined to 1 when the VK_EXT_memory_priority device extension is defined in the Vulkan headers.
#if !defined(VMA_MEMORY_PRIORITY)
    #if VK_EXT_memory_priority
        #define VMA_MEMORY_PRIORITY 1
    #else
        #define VMA_MEMORY_PRIORITY 0
    #endif
#endif

// Defined to 1 when the VK_KHR_maintenance4 device extension is defined in the Vulkan headers.
#if !defined(VMA_KHR_MAINTENANCE4)
    #if VK_KHR_maintenance4
        #define VMA_KHR_MAINTENANCE4 1
    #else
        #define VMA_KHR_MAINTENANCE4 0
    #endif
#endif

// Defined to 1 when the VK_KHR_maintenance5 device extension is defined in the Vulkan headers.
#if !defined(VMA_KHR_MAINTENANCE5)
    #if VK_KHR_maintenance5
        #define VMA_KHR_MAINTENANCE5 1
    #else
        #define VMA_KHR_MAINTENANCE5 0
    #endif
#endif

// Defined to 1 when the VK_KHR_external_memory device extension is defined in the Vulkan headers.
#if !defined(VMA_EXTERNAL_MEMORY)
    #if VK_KHR_external_memory
        #define VMA_EXTERNAL_MEMORY 1
    #else
        #define VMA_EXTERNAL_MEMORY 0
    #endif
#endif

// Defined to 1 when the VK_KHR_external_memory_win32 device extension is defined in the Vulkan headers.
#if !defined(VMA_EXTERNAL_MEMORY_WIN32)
    #if VK_KHR_external_memory_win32
        #define VMA_EXTERNAL_MEMORY_WIN32 1
    #else
        #define VMA_EXTERNAL_MEMORY_WIN32 0
    #endif
#endif

// Define these macros to decorate all public functions with additional code,
// before and after the returned type, appropriately. This may be useful for
// exporting the functions when compiling VMA as a separate library. Example:
// #define VMA_CALL_PRE __declspec(dllexport)
// #define VMA_CALL_POST __cdecl
#ifndef VMA_CALL_PRE
    #define VMA_CALL_PRE
#endif
#ifndef VMA_CALL_POST
    #define VMA_CALL_POST
#endif

// Define this macro to decorate pNext pointers with an attribute specifying the Vulkan
// structure that will be extended via the pNext chain.
#ifndef VMA_EXTENDS_VK_STRUCT
    #define VMA_EXTENDS_VK_STRUCT(vkStruct)
#endif

// Define this macro to decorate pointers with an attribute specifying the
// length of the array they point to, if they are not null.
//
// The length may be one of:
// - The name of another parameter in the argument list where the pointer is declared
// - The name of another member in the struct where the pointer is declared
// - The name of a member of a struct type, meaning the value of that member in
//   the context of the call. For example
//   VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount")
//   means the number of memory heaps available in the device associated
//   with the VmaAllocator being dealt with.
#ifndef VMA_LEN_IF_NOT_NULL
    #define VMA_LEN_IF_NOT_NULL(len)
#endif

// The VMA_NULLABLE macro is defined to be _Nullable when compiling with Clang.
// See: https://clang.llvm.org/docs/AttributeReference.html#nullable
#ifndef VMA_NULLABLE
    #ifdef __clang__
        #define VMA_NULLABLE _Nullable
    #else
        #define VMA_NULLABLE
    #endif
#endif

// The VMA_NOT_NULL macro is defined to be _Nonnull when compiling with Clang.
// See: https://clang.llvm.org/docs/AttributeReference.html#nonnull
#ifndef VMA_NOT_NULL
    #ifdef __clang__
        #define VMA_NOT_NULL _Nonnull
    #else
        #define VMA_NOT_NULL
    #endif
#endif

// If non-dispatchable handles are represented as pointers, then we can give
// them nullability annotations.
#ifndef VMA_NOT_NULL_NON_DISPATCHABLE
    #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__)) || defined(_M_X64) || defined(__ia64) || defined(_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
        #define VMA_NOT_NULL_NON_DISPATCHABLE VMA_NOT_NULL
    #else
        #define VMA_NOT_NULL_NON_DISPATCHABLE
    #endif
#endif

#ifndef VMA_NULLABLE_NON_DISPATCHABLE
    #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__)) || defined(_M_X64) || defined(__ia64) || defined(_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
        #define VMA_NULLABLE_NON_DISPATCHABLE VMA_NULLABLE
    #else
        #define VMA_NULLABLE_NON_DISPATCHABLE
    #endif
#endif

#ifndef VMA_STATS_STRING_ENABLED
    #define VMA_STATS_STRING_ENABLED 1
#endif

////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
//
// INTERFACE
//
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////

// Sections for managing code placement in the file, only for development purposes, e.g. for convenient folding inside an IDE.
#ifndef _VMA_ENUM_DECLARATIONS

/**
\addtogroup group_init
@{
*/

/// Flags for created #VmaAllocator.
typedef enum VmaAllocatorCreateFlagBits
{
    /** \brief Allocator and all objects created from it will not be synchronized internally, so you must guarantee they are used from only one thread at a time or synchronized externally by you.

    Using this flag may increase performance because internal mutexes are not used.
    */
    VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT = 0x00000001,
    /** \brief Enables usage of the VK_KHR_dedicated_allocation extension.

    The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
    When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.

    Using this extension will automatically allocate dedicated blocks of memory for
    some buffers and images instead of suballocating place for them out of bigger
    memory blocks (as if you explicitly used the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT
    flag) when it is recommended by the driver. It may improve performance on some
    GPUs.

    You may set this flag only if you found out that the following device extensions are
    supported, you enabled them while creating the Vulkan device passed as
    VmaAllocatorCreateInfo::device, and you want them to be used internally by this
    library:

    - VK_KHR_get_memory_requirements2 (device extension)
    - VK_KHR_dedicated_allocation (device extension)

    When this flag is set, you may see the following warnings reported by the Vulkan
    validation layer. You can ignore them.

    > vkBindBufferMemory(): Binding memory to buffer 0x2d but vkGetBufferMemoryRequirements() has not been called on that buffer.
    */
    VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT = 0x00000002,
    /**
    Enables usage of the VK_KHR_bind_memory2 extension.

    The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
    When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.

    You may set this flag only if you found out that this device extension is supported,
    you enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    and you want it to be used internally by this library.

    The extension provides the functions `vkBindBufferMemory2KHR` and `vkBindImageMemory2KHR`,
    which allow passing a chain of `pNext` structures while binding.
    This flag is required if you use the `pNext` parameter in vmaBindBufferMemory2() or vmaBindImageMemory2().
    */
    VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT = 0x00000004,
    /**
    Enables usage of the VK_EXT_memory_budget extension.

    You may set this flag only if you found out that this device extension is supported,
    you enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    and you want it to be used internally by this library, along with the instance extension
    VK_KHR_get_physical_device_properties2, which is required by it (or Vulkan 1.1, where this extension is promoted).

    The extension provides a query for current memory usage and budget, which will probably
    be more accurate than the estimation used by the library otherwise.
    */
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT = 0x00000008,
    /**
    Enables usage of the VK_AMD_device_coherent_memory extension.

    You may set this flag only if you:

    - found out that this device extension is supported and enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    - checked that `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true and set it while creating the Vulkan device,
    - want it to be used internally by this library.

    The extension and the accompanying device feature provide access to memory types with
    the `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flags.
    They are useful mostly for writing breadcrumb markers - a common method for debugging GPU crashes/hangs/TDRs.

    When the extension is not enabled, such memory types are still enumerated, but their usage is illegal.
    To protect against this error, if you don't create the allocator with this flag, it will refuse to allocate any memory or create a custom pool in such memory types,
    returning `VK_ERROR_FEATURE_NOT_PRESENT`.
    */
    VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT = 0x00000010,
    /**
    Enables usage of the "buffer device address" feature, which allows you to use the
    `vkGetBufferDeviceAddress*` functions to get a raw GPU pointer to a buffer and pass it for usage inside a shader.

    You may set this flag only if you:

    1. (For Vulkan version < 1.2) Found as available and enabled the device extension
       VK_KHR_buffer_device_address.
       This extension is promoted to core Vulkan 1.2.
    2. Found as available and enabled the device feature `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress`.

    When this flag is set, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT` using VMA.
    The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT` to
    allocated memory blocks wherever it might be needed.

    For more information, see documentation chapter \ref enabling_buffer_device_address.
    */
    VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT = 0x00000020,
    /**
    Enables usage of the VK_EXT_memory_priority extension in the library.

    You may set this flag only if you found this device extension available and enabled it,
    along with `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority == VK_TRUE`,
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.

    When this flag is used, VmaAllocationCreateInfo::priority and VmaPoolCreateInfo::priority
    are used to set priorities of allocated Vulkan memory. Without it, these variables are ignored.

    A priority must be a floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
    Larger values are higher priority. The granularity of the priorities is implementation-dependent.
    It is automatically passed to every call to `vkAllocateMemory` done by the library using the `VkMemoryPriorityAllocateInfoEXT` structure.
    The value to be used for the default priority is 0.5.
    For more details, see the documentation of the VK_EXT_memory_priority extension.
    */
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT = 0x00000040,
    /**
    Enables usage of the VK_KHR_maintenance4 extension in the library.

    You may set this flag only if you found this device extension available and enabled it
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.
    */
    VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT = 0x00000080,
    /**
    Enables usage of the VK_KHR_maintenance5 extension in the library.

    You should set this flag if you found this device extension available and enabled it
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.
    */
    VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT = 0x00000100,

    /**
    Enables usage of the VK_KHR_external_memory_win32 extension in the library.

    You should set this flag if you found this device extension available and enabled it
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.
    For more information, see \ref vk_khr_external_memory_win32.
    */
    VMA_ALLOCATOR_CREATE_KHR_EXTERNAL_MEMORY_WIN32_BIT = 0x00000200,

    VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocatorCreateFlagBits;
/// See #VmaAllocatorCreateFlagBits.
typedef VkFlags VmaAllocatorCreateFlags;

/** @} */

/**
\addtogroup group_alloc
@{
*/

/// \brief Intended usage of the allocated memory.
typedef enum VmaMemoryUsage
{
    /** No intended memory usage specified.
    Use other members of VmaAllocationCreateInfo to specify your requirements.
    */
    VMA_MEMORY_USAGE_UNKNOWN = 0,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_GPU_ONLY = 1,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` and `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_ONLY = 2,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_TO_GPU = 3,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
    */
    VMA_MEMORY_USAGE_GPU_TO_CPU = 4,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Prefers memory that is not `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_COPY = 5,
    /**
    Lazily allocated GPU memory having `VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT`.
    Exists mostly on mobile platforms. Using it on a desktop PC or other GPUs with no such memory type present will fail the allocation.

    Usage: Memory for transient attachment images (color attachments, depth attachments etc.), created with `VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT`.

    Allocations with this usage are always created as dedicated - it implies #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
    */
    VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED = 6,
    /**
    Selects the best memory type automatically.
    This flag is recommended for most common use cases.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
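
    As a quick illustration - a minimal sketch of creating a buffer with this usage
    (assuming a valid `allocator`; the size and usage flags are illustrative):

    \code
    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 65536;
    bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

    VkBuffer buf;
    VmaAllocation alloc;
    VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, NULL);
    \endcode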
    */
    VMA_MEMORY_USAGE_AUTO = 7,
    /**
    Selects the best memory type automatically with preference for GPU (device) memory.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE = 8,
    /**
    Selects the best memory type automatically with preference for CPU (host) memory.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO_PREFER_HOST = 9,

    VMA_MEMORY_USAGE_MAX_ENUM = 0x7FFFFFFF
} VmaMemoryUsage;

/// Flags to be passed as VmaAllocationCreateInfo::flags.
typedef enum VmaAllocationCreateFlagBits
{
    /** \brief Set this flag if the allocation should have its own memory block.

    Use it for special, big resources, like fullscreen images used as attachments.

    If you use this flag while creating a buffer or an image, the `VkMemoryDedicatedAllocateInfo`
    structure is applied if possible.
    */
    VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT = 0x00000001,

    /** \brief Set this flag to only try to allocate from existing `VkDeviceMemory` blocks and never create a new such block.

    If the new allocation cannot be placed in any of the existing blocks, the allocation
    fails with the `VK_ERROR_OUT_OF_DEVICE_MEMORY` error.

    You should not use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT and
    #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT at the same time. It makes no sense.
    */
    VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT = 0x00000002,
    /** \brief Set this flag to use memory that will be persistently mapped, and to retrieve a pointer to it.

    The pointer to the mapped memory will be returned through VmaAllocationInfo::pMappedData.

    It is valid to use this flag for an allocation made from a memory type that is not
    `HOST_VISIBLE`. The flag is then ignored and the memory is not mapped. This is
    useful if you need an allocation that is efficient to use on GPU
    (`DEVICE_LOCAL`) and still want to map it directly if possible on platforms that
    support it (e.g. Intel GPUs).
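
    A minimal sketch of persistent mapping (assuming `allocator`, `bufCreateInfo`, and the
    source data `myData`/`myDataSize` exist; the flags shown are one typical combination):

    \code
    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
        VMA_ALLOCATION_CREATE_MAPPED_BIT;

    VkBuffer buf;
    VmaAllocation alloc;
    VmaAllocationInfo allocInfo;
    vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);

    // If the memory is HOST_VISIBLE, allocInfo.pMappedData stays valid until the allocation is destroyed.
    memcpy(allocInfo.pMappedData, myData, myDataSize);
    \endcode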
    */
    VMA_ALLOCATION_CREATE_MAPPED_BIT = 0x00000004,
    /** \deprecated Preserved for backward compatibility. Consider using vmaSetAllocationName() instead.

    Set this flag to treat VmaAllocationCreateInfo::pUserData as a pointer to a
    null-terminated string. Instead of copying the pointer value, a local copy of the
    string is made and stored in the allocation's `pName`. The string is automatically
    freed together with the allocation. It is also used in vmaBuildStatsString().
    */
    VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT = 0x00000020,
    /** Allocation will be created from the upper stack in a double stack pool.

    This flag is only allowed for custom pools created with the #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT flag.
    */
    VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = 0x00000040,
    /** Create both buffer/image and allocation, but don't bind them together.
    It is useful when you want to perform the binding yourself, e.g. to do some more advanced binding using extensions.
    The flag is meaningful only with functions that bind by default: vmaCreateBuffer(), vmaCreateImage().
    Otherwise it is ignored.

    If you want to make sure the new buffer/image is not tied to the new memory allocation
    through the `VkMemoryDedicatedAllocateInfoKHR` structure in case the allocation ends up in its own memory block,
    also use the flag #VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT.
    */
    VMA_ALLOCATION_CREATE_DONT_BIND_BIT = 0x00000080,
    /** Create the allocation only if the additional device memory required for it, if any, won't exceed
    the memory budget. Otherwise the allocation fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
    */
    VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT = 0x00000100,
    /** \brief Set this flag if the allocated memory will have aliasing resources.

    Usage of this flag prevents supplying `VkMemoryDedicatedAllocateInfoKHR` when #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT is specified.
    Otherwise the created dedicated memory will not be suitable for aliasing resources, resulting in Vulkan validation layer errors.
    */
    VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT = 0x00000200,
    /**
    Requests the possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).

    - If you use #VMA_MEMORY_USAGE_AUTO or another `VMA_MEMORY_USAGE_AUTO*` value,
      you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
    - If you use another value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
      This includes allocations created in \ref custom_memory_pools.

    Declares that mapped memory will only be written sequentially, e.g. using `memcpy()` or a loop writing number-by-number,
    never read or accessed randomly, so a memory type can be selected that is uncached and write-combined.

    \warning Violating this declaration may work correctly, but will likely be very slow.
    Watch out for implicit reads introduced by doing e.g. `pMappedData[i] += x;`.
    Better prepare your data in a local variable and `memcpy()` it to the mapped pointer all at once.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT = 0x00000400,
    /**
    Requests the possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).

    - If you use #VMA_MEMORY_USAGE_AUTO or another `VMA_MEMORY_USAGE_AUTO*` value,
      you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
    - If you use another value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
      This includes allocations created in \ref custom_memory_pools.

    Declares that mapped memory can be read, written, and accessed in random order,
    so a `HOST_CACHED` memory type is preferred.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT = 0x00000800,
    /**
    Together with #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT,
    it declares that despite the request for host access, a non-`HOST_VISIBLE` memory type can be selected
    if it may improve performance.

    By using this flag, you declare that you will check if the allocation ended up in a `HOST_VISIBLE` memory type
    (e.g. using vmaGetAllocationMemoryProperties()) and if not, you will create some "staging" buffer and
    issue an explicit transfer to write/read your data.
    To prepare for this possibility, don't forget to add the appropriate flags, like
    `VK_BUFFER_USAGE_TRANSFER_DST_BIT` or `VK_BUFFER_USAGE_TRANSFER_SRC_BIT`, to the parameters of the created buffer or image.
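
    A sketch of the check described above (assuming `allocator` and `alloc` already exist):

    \code
    VkMemoryPropertyFlags memPropFlags;
    vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);
    if((memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    {
        // The allocation ended up mappable - map it and write directly.
    }
    else
    {
        // Not HOST_VISIBLE - create a staging buffer and issue an explicit transfer.
    }
    \endcode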
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT = 0x00001000,
    /** Allocation strategy that chooses the smallest possible free range for the allocation
    to minimize memory usage and fragmentation, possibly at the expense of allocation time.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = 0x00010000,
    /** Allocation strategy that chooses the first suitable free range for the allocation -
    not necessarily in terms of the smallest offset, but the one that is easiest and fastest to find -
    to minimize allocation time, possibly at the expense of allocation quality.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = 0x00020000,
    /** Allocation strategy that always chooses the lowest offset in available space.
    This is not the most efficient strategy, but it achieves highly packed data.
    Used internally by defragmentation, not recommended in typical usage.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT = 0x00040000,
    /** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
    /** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
    /** A bit mask to extract only `STRATEGY` bits from the entire set of flags.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MASK =
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT |
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT |
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,

    VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocationCreateFlagBits;
/// See #VmaAllocationCreateFlagBits.
typedef VkFlags VmaAllocationCreateFlags;

/// Flags to be passed as VmaPoolCreateInfo::flags.
typedef enum VmaPoolCreateFlagBits
{
    /** \brief Use this flag if you always allocate only buffers and linear images, or only optimal images, out of this pool, so Buffer-Image Granularity can be ignored.

    This is an optional optimization flag.

    If you always allocate using vmaCreateBuffer(), vmaCreateImage(),
    vmaAllocateMemoryForBuffer(), then you don't need to use it because the allocator
    knows the exact type of your allocations, so it can handle Buffer-Image Granularity
    in the optimal way.

    If you also allocate using vmaAllocateMemoryForImage() or vmaAllocateMemory(),
    the exact type of such allocations is not known, so the allocator must be conservative
    in handling Buffer-Image Granularity, which can lead to suboptimal allocation
    (wasted memory). In that case, if you can make sure you always allocate only
    buffers and linear images, or only optimal images, out of this pool, use this flag
    to make the allocator disregard Buffer-Image Granularity and so make allocations
    faster and more optimal.
    */
    VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT = 0x00000002,

    /** \brief Enables an alternative, linear allocation algorithm in this pool.

    Specify this flag to enable the linear allocation algorithm, which always creates
    new allocations after the last one and doesn't reuse space from allocations freed in
    between. It trades memory consumption for a simplified algorithm and data
    structure, which has better performance and uses less memory for metadata.

    By using this flag, you can achieve the behavior of a free-at-once, stack,
    ring buffer, or double stack.
    For details, see documentation chapter \ref linear_algorithm.
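
    A hedged sketch of creating such a pool (the block size is illustrative, and
    `memTypeIndex` would come from e.g. vmaFindMemoryTypeIndexForBufferInfo()):

    \code
    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
    poolCreateInfo.blockSize = 64ull * 1024 * 1024; // Hypothetical fixed block size.
    poolCreateInfo.maxBlockCount = 1; // A linear pool typically uses a single block.

    VmaPool pool;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
    \endcode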
    */
    VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT = 0x00000004,

    /** Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    */
    VMA_POOL_CREATE_ALGORITHM_MASK =
        VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT,

    VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaPoolCreateFlagBits;
/// Flags to be passed as VmaPoolCreateInfo::flags. See #VmaPoolCreateFlagBits.
typedef VkFlags VmaPoolCreateFlags;

/// Flags to be passed as VmaDefragmentationInfo::flags.
typedef enum VmaDefragmentationFlagBits
{
    /** \brief Use a simple but fast algorithm for defragmentation.
    May not achieve the best results, but will require the least time to compute and the fewest allocations to copy.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT = 0x1,
    /** \brief Default defragmentation algorithm, applied also when no `ALGORITHM` flag is specified.
    Offers a balance between defragmentation quality and the number of allocations and bytes that need to be moved.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT = 0x2,
    /** \brief Perform full defragmentation of memory.
    Can result in notably more time to compute and allocations to copy, but will achieve the best memory packing.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT = 0x4,
    /** \brief Use the most robust algorithm at the cost of time to compute and number of copies to make.
    Only available when bufferImageGranularity is greater than 1, since it aims to reduce
    alignment issues between different types of resources.
    Otherwise falls back to the same behavior as #VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT = 0x8,

    /// A bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK =
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT,

    VMA_DEFRAGMENTATION_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaDefragmentationFlagBits;
/// See #VmaDefragmentationFlagBits.
typedef VkFlags VmaDefragmentationFlags;

/// Operation performed on a single defragmentation move. See structure #VmaDefragmentationMove.
typedef enum VmaDefragmentationMoveOperation
{
    /// The buffer/image has been recreated at `dstTmpAllocation`, the data has been copied, and the old buffer/image has been destroyed. `srcAllocation` should be changed to point to the new place. This is the default value set by vmaBeginDefragmentationPass().
    VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY = 0,
    /// Set this value if you cannot move the allocation. The new place reserved at `dstTmpAllocation` will be freed. `srcAllocation` will remain unchanged.
    VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE = 1,
    /// Set this value if you decide to abandon the allocation and you destroyed the buffer/image. The new place reserved at `dstTmpAllocation` will be freed, along with `srcAllocation`, which will be destroyed.
    VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY = 2,
} VmaDefragmentationMoveOperation;
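
/*
A condensed sketch of the defragmentation flow in which these operations are reported
(assuming a valid `allocator`; resource recreation, data copying, and full error
handling are elided):

    VmaDefragmentationInfo defragInfo = {};
    defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;

    VmaDefragmentationContext defragCtx;
    VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

    for(;;)
    {
        VmaDefragmentationPassMoveInfo pass;
        res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
        if(res == VK_SUCCESS)
            break; // Nothing left to move.

        for(uint32_t i = 0; i < pass.moveCount; ++i)
        {
            // Recreate the resource at pass.pMoves[i].dstTmpAllocation and copy the data,
            // or set pass.pMoves[i].operation to IGNORE / DESTROY.
        }

        res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
        if(res == VK_SUCCESS)
            break; // Defragmentation finished.
    }

    vmaEndDefragmentation(allocator, defragCtx, NULL);
*/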

/** @} */

/**
\addtogroup group_virtual
@{
*/

/// Flags to be passed as VmaVirtualBlockCreateInfo::flags.
typedef enum VmaVirtualBlockCreateFlagBits
{
    /** \brief Enables an alternative, linear allocation algorithm in this virtual block.

    Specify this flag to enable the linear allocation algorithm, which always creates
    new allocations after the last one and doesn't reuse space from allocations freed in
    between. It trades memory consumption for a simplified algorithm and data
    structure, which has better performance and uses less memory for metadata.

    By using this flag, you can achieve the behavior of a free-at-once, stack,
    ring buffer, or double stack.
    For details, see documentation chapter \ref linear_algorithm.
    */
    VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT = 0x00000001,

    /** \brief Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    */
    VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK =
        VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT,

    VMA_VIRTUAL_BLOCK_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualBlockCreateFlagBits;
/// Flags to be passed as VmaVirtualBlockCreateInfo::flags. See #VmaVirtualBlockCreateFlagBits.
typedef VkFlags VmaVirtualBlockCreateFlags;

/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags.
typedef enum VmaVirtualAllocationCreateFlagBits
{
    /** \brief Allocation will be created from the upper stack in a double stack pool.

    This flag is only allowed for virtual blocks created with the #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT flag.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
    /** \brief Allocation strategy that tries to minimize memory usage.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
    /** \brief Allocation strategy that tries to minimize allocation time.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
    /** Allocation strategy that always chooses the lowest offset in available space.
    This is not the most efficient strategy, but it achieves highly packed data.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
    /** \brief A bit mask to extract only `STRATEGY` bits from the entire set of flags.

    These strategy flags are binary compatible with equivalent flags in #VmaAllocationCreateFlagBits.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK = VMA_ALLOCATION_CREATE_STRATEGY_MASK,

    VMA_VIRTUAL_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualAllocationCreateFlagBits;
/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags. See #VmaVirtualAllocationCreateFlagBits.
typedef VkFlags VmaVirtualAllocationCreateFlags;

/** @} */

#endif // _VMA_ENUM_DECLARATIONS

#ifndef _VMA_DATA_TYPES_DECLARATIONS

/**
\addtogroup group_init
@{ */

/** \struct VmaAllocator
\brief Represents the main object of this library, initialized and ready for use.

Fill the structure #VmaAllocatorCreateInfo and call the function vmaCreateAllocator() to create it.
Call the function vmaDestroyAllocator() to destroy it.

It is recommended to create just one object of this type per `VkDevice` object,
right after Vulkan is initialized, and keep it alive until just before the Vulkan device is destroyed.
*/
VK_DEFINE_HANDLE(VmaAllocator)

/** @} */

/**
\addtogroup group_alloc
@{
*/

/** \struct VmaPool
\brief Represents a custom memory pool.

Fill the structure VmaPoolCreateInfo and call the function vmaCreatePool() to create it.
Call the function vmaDestroyPool() to destroy it.

For more information, see [Custom memory pools](@ref choosing_memory_type_custom_memory_pools).
*/
VK_DEFINE_HANDLE(VmaPool)

/** \struct VmaAllocation
\brief Represents a single memory allocation.

It may be either a dedicated block of `VkDeviceMemory` or a specific region of a bigger block of this type
plus a unique offset.

There are multiple ways to create such an object.
You need to fill the structure VmaAllocationCreateInfo.
For more information, see [Choosing memory type](@ref choosing_memory_type).

Although the library provides convenience functions that create a Vulkan buffer or image,
allocate memory for it, and bind them together,
binding of the allocation to a buffer or an image is out of scope of the allocation itself.
An allocation object can exist without a buffer/image bound to it;
the binding can be done manually by the user, and destruction of the buffer/image can be done
independently of the destruction of the allocation.

The object also remembers its size and some other information.
To retrieve this information, use the function vmaGetAllocationInfo() and inspect
the returned structure VmaAllocationInfo, as sketched below.
*/
VK_DEFINE_HANDLE(VmaAllocation)
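
/*
A small sketch of querying an existing allocation, as mentioned above
(assuming `allocator` and `alloc` already exist):

    VmaAllocationInfo allocInfo;
    vmaGetAllocationInfo(allocator, alloc, &allocInfo);
    // allocInfo.deviceMemory, allocInfo.offset, and allocInfo.size describe its placement.
*/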

/** \struct VmaDefragmentationContext
\brief An opaque object that represents a started defragmentation process.

Fill the structure #VmaDefragmentationInfo and call the function vmaBeginDefragmentation() to create it.
Call the function vmaEndDefragmentation() to destroy it.
*/
VK_DEFINE_HANDLE(VmaDefragmentationContext)

/** @} */

/**
\addtogroup group_virtual
@{
*/

/** \struct VmaVirtualAllocation
\brief Represents a single memory allocation made inside a VmaVirtualBlock.

Use it as a unique identifier of a virtual allocation within the single block.

Use the value `VK_NULL_HANDLE` to represent a null/invalid allocation.
*/
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaVirtualAllocation)

/** @} */

/**
\addtogroup group_virtual
@{
*/

/** \struct VmaVirtualBlock
\brief Handle to a virtual block object that allows using the core allocation algorithm without allocating any real GPU memory.

Fill in the #VmaVirtualBlockCreateInfo structure and use vmaCreateVirtualBlock() to create it. Use vmaDestroyVirtualBlock() to destroy it.
For more information, see documentation chapter \ref virtual_allocator. A minimal usage sketch follows below.

This object is not thread-safe - it should not be used from multiple threads simultaneously and must be synchronized externally.
*/
VK_DEFINE_HANDLE(VmaVirtualBlock)
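
/*
A minimal sketch of using a virtual block (the sizes are illustrative):

    VmaVirtualBlockCreateInfo blockCreateInfo = {};
    blockCreateInfo.size = 1048576; // 1 MB of "virtual" space.

    VmaVirtualBlock block;
    VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);

    VmaVirtualAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.size = 4096;

    VmaVirtualAllocation alloc;
    VkDeviceSize offset;
    res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
    // ... use the range [offset, offset + 4096) for your own purposes ...
    vmaVirtualFree(block, alloc);
    vmaDestroyVirtualBlock(block);
*/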

/** @} */

/**
\addtogroup group_init
@{
*/

/// Callback function called after successful vkAllocateMemory.
typedef void (VKAPI_PTR* PFN_vmaAllocateDeviceMemoryFunction)(
    VmaAllocator VMA_NOT_NULL allocator,
    uint32_t memoryType,
    VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
    VkDeviceSize size,
    void* VMA_NULLABLE pUserData);

/// Callback function called before vkFreeMemory.
typedef void (VKAPI_PTR* PFN_vmaFreeDeviceMemoryFunction)(
    VmaAllocator VMA_NOT_NULL allocator,
    uint32_t memoryType,
    VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
    VkDeviceSize size,
    void* VMA_NULLABLE pUserData);

/** \brief Set of callbacks that the library will call for `vkAllocateMemory` and `vkFreeMemory`.

Provided for informative purposes, e.g. to gather statistics about the number of
allocations or the total amount of memory allocated in Vulkan.

Used in VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
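
A hypothetical sketch of a counting callback (the counter passed via `pUserData` is illustrative):

\code
static VKAPI_ATTR void VKAPI_CALL MyAllocateCallback(
    VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    ++*(uint32_t*)pUserData; // Count each vkAllocateMemory done by the library.
}
\endcode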
*/
typedef struct VmaDeviceMemoryCallbacks
{
    /// Optional, can be null.
    PFN_vmaAllocateDeviceMemoryFunction VMA_NULLABLE pfnAllocate;
    /// Optional, can be null.
    PFN_vmaFreeDeviceMemoryFunction VMA_NULLABLE pfnFree;
    /// Optional, can be null.
    void* VMA_NULLABLE pUserData;
} VmaDeviceMemoryCallbacks;

/** \brief Pointers to some Vulkan functions - a subset used by the library.

Used in VmaAllocatorCreateInfo::pVulkanFunctions.
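
When the library is configured with VMA_DYNAMIC_VULKAN_FUNCTIONS, it is typically enough
to provide the two entry points and let the library fetch the rest, e.g.:

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;
\endcode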
*/
typedef struct VmaVulkanFunctions
{
    /// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
    PFN_vkGetInstanceProcAddr VMA_NULLABLE vkGetInstanceProcAddr;
    /// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
    PFN_vkGetDeviceProcAddr VMA_NULLABLE vkGetDeviceProcAddr;
    PFN_vkGetPhysicalDeviceProperties VMA_NULLABLE vkGetPhysicalDeviceProperties;
    PFN_vkGetPhysicalDeviceMemoryProperties VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties;
    PFN_vkAllocateMemory VMA_NULLABLE vkAllocateMemory;
    PFN_vkFreeMemory VMA_NULLABLE vkFreeMemory;
    PFN_vkMapMemory VMA_NULLABLE vkMapMemory;
    PFN_vkUnmapMemory VMA_NULLABLE vkUnmapMemory;
    PFN_vkFlushMappedMemoryRanges VMA_NULLABLE vkFlushMappedMemoryRanges;
    PFN_vkInvalidateMappedMemoryRanges VMA_NULLABLE vkInvalidateMappedMemoryRanges;
    PFN_vkBindBufferMemory VMA_NULLABLE vkBindBufferMemory;
    PFN_vkBindImageMemory VMA_NULLABLE vkBindImageMemory;
    PFN_vkGetBufferMemoryRequirements VMA_NULLABLE vkGetBufferMemoryRequirements;
    PFN_vkGetImageMemoryRequirements VMA_NULLABLE vkGetImageMemoryRequirements;
    PFN_vkCreateBuffer VMA_NULLABLE vkCreateBuffer;
    PFN_vkDestroyBuffer VMA_NULLABLE vkDestroyBuffer;
    PFN_vkCreateImage VMA_NULLABLE vkCreateImage;
    PFN_vkDestroyImage VMA_NULLABLE vkDestroyImage;
    PFN_vkCmdCopyBuffer VMA_NULLABLE vkCmdCopyBuffer;
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    /// Fetch "vkGetBufferMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetBufferMemoryRequirements2KHR" when using the VK_KHR_dedicated_allocation extension.
    PFN_vkGetBufferMemoryRequirements2KHR VMA_NULLABLE vkGetBufferMemoryRequirements2KHR;
    /// Fetch "vkGetImageMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetImageMemoryRequirements2KHR" when using the VK_KHR_dedicated_allocation extension.
    PFN_vkGetImageMemoryRequirements2KHR VMA_NULLABLE vkGetImageMemoryRequirements2KHR;
#endif
#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    /// Fetch "vkBindBufferMemory2" on Vulkan >= 1.1, fetch "vkBindBufferMemory2KHR" when using the VK_KHR_bind_memory2 extension.
    PFN_vkBindBufferMemory2KHR VMA_NULLABLE vkBindBufferMemory2KHR;
    /// Fetch "vkBindImageMemory2" on Vulkan >= 1.1, fetch "vkBindImageMemory2KHR" when using the VK_KHR_bind_memory2 extension.
    PFN_vkBindImageMemory2KHR VMA_NULLABLE vkBindImageMemory2KHR;
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
    /// Fetch from "vkGetPhysicalDeviceMemoryProperties2" on Vulkan >= 1.1, but you can also fetch it from "vkGetPhysicalDeviceMemoryProperties2KHR" if you enabled the extension VK_KHR_get_physical_device_properties2.
    PFN_vkGetPhysicalDeviceMemoryProperties2KHR VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties2KHR;
#endif
#if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
    /// Fetch from "vkGetDeviceBufferMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceBufferMemoryRequirementsKHR" if you enabled the extension VK_KHR_maintenance4.
    PFN_vkGetDeviceBufferMemoryRequirementsKHR VMA_NULLABLE vkGetDeviceBufferMemoryRequirements;
    /// Fetch from "vkGetDeviceImageMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceImageMemoryRequirementsKHR" if you enabled the extension VK_KHR_maintenance4.
    PFN_vkGetDeviceImageMemoryRequirementsKHR VMA_NULLABLE vkGetDeviceImageMemoryRequirements;
#endif
#if VMA_EXTERNAL_MEMORY_WIN32
    PFN_vkGetMemoryWin32HandleKHR VMA_NULLABLE vkGetMemoryWin32HandleKHR;
#else
    void* VMA_NULLABLE vkGetMemoryWin32HandleKHR;
#endif
} VmaVulkanFunctions;

/// Description of an Allocator to be created.
typedef struct VmaAllocatorCreateInfo
{
    /// Flags for the created allocator. Use the #VmaAllocatorCreateFlagBits enum.
    VmaAllocatorCreateFlags flags;
    /// Vulkan physical device.
    /** It must be valid throughout the whole lifetime of the created allocator. */
    VkPhysicalDevice VMA_NOT_NULL physicalDevice;
    /// Vulkan device.
    /** It must be valid throughout the whole lifetime of the created allocator. */
    VkDevice VMA_NOT_NULL device;
    /// Preferred size of a single `VkDeviceMemory` block to be allocated from large heaps > 1 GiB. Optional.
    /** Set to 0 to use the default, which is currently 256 MiB. */
    VkDeviceSize preferredLargeHeapBlockSize;
    /// Custom CPU memory allocation callbacks. Optional.
    /** Optional, can be null. When specified, will also be used for all CPU-side memory allocations. */
    const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
    /// Informative callbacks for `vkAllocateMemory`, `vkFreeMemory`. Optional.
    /** Optional, can be null. */
    const VmaDeviceMemoryCallbacks* VMA_NULLABLE pDeviceMemoryCallbacks;
    /** \brief Either null or a pointer to an array of limits on the maximum number of bytes that can be allocated out of particular Vulkan memory heaps.

    If not NULL, it must be a pointer to an array of
    `VkPhysicalDeviceMemoryProperties::memoryHeapCount` elements, defining the limit on the
    maximum number of bytes that can be allocated out of a particular Vulkan memory
    heap.

    Any of the elements may be equal to `VK_WHOLE_SIZE`, which means no limit on that
    heap. This is also the default in case of `pHeapSizeLimit` = NULL.

    If there is a limit defined for a heap:

    - If the user tries to allocate more memory from that heap using this allocator,
      the allocation fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
    - If the limit is smaller than the heap size reported in `VkMemoryHeap::size`, the
      value of this limit will be reported instead when using vmaGetMemoryProperties().

    Warning! Using this feature may not be equivalent to installing a GPU with a
    smaller amount of memory, because the graphics driver doesn't necessarily fail new
    allocations with the `VK_ERROR_OUT_OF_DEVICE_MEMORY` result when memory capacity is
    exceeded. It may return success and just silently migrate some device memory
    blocks to system RAM. This driver behavior can also be controlled using the
    VK_AMD_memory_overallocation_behavior extension.
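
    A hedged sketch of limiting one heap (the 1 GiB cap on heap 0 is purely illustrative,
    and `allocatorCreateInfo` is the structure being filled before vmaCreateAllocator()):

    \code
    VkPhysicalDeviceMemoryProperties memProps;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

    VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
    for(uint32_t i = 0; i < memProps.memoryHeapCount; ++i)
        heapSizeLimit[i] = VK_WHOLE_SIZE; // No limit by default.
    heapSizeLimit[0] = 1ull << 30; // Hypothetical 1 GiB cap on heap 0.

    allocatorCreateInfo.pHeapSizeLimit = heapSizeLimit;
    \endcode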
    */
    const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pHeapSizeLimit;

    /** \brief Pointers to Vulkan functions. Can be null.

    For details, see [Pointers to Vulkan functions](@ref config_Vulkan_functions).
    */
    const VmaVulkanFunctions* VMA_NULLABLE pVulkanFunctions;
    /** \brief Handle to the Vulkan instance object.

    Starting from version 3.0.0 this member is no longer optional; it must be set!
    */
    VkInstance VMA_NOT_NULL instance;
    /** \brief Optional. The Vulkan version that the application uses.

    It must be a value in the format created by the `VK_MAKE_VERSION` macro, or a constant like `VK_API_VERSION_1_1` or `VK_API_VERSION_1_0`.
    The patch version number specified is ignored. Only the major and minor versions are considered.
    Only versions 1.0...1.4 are supported by the current implementation.
    Leaving it initialized to zero is equivalent to `VK_API_VERSION_1_0`.
    It must match the Vulkan version used by the application and supported on the selected physical device,
    so it must be no higher than `VkApplicationInfo::apiVersion` passed to `vkCreateInstance`
    and no higher than `VkPhysicalDeviceProperties::apiVersion` found on the physical device used.
    */
    uint32_t vulkanApiVersion;
#if VMA_EXTERNAL_MEMORY
    /** \brief Either null or a pointer to an array of external memory handle types for each Vulkan memory type.

    If not NULL, it must be a pointer to an array of `VkPhysicalDeviceMemoryProperties::memoryTypeCount`
    elements, defining the external memory handle types of particular Vulkan memory types,
    to be passed using `VkExportMemoryAllocateInfoKHR`.

    Any of the elements may be equal to 0, which means not to use `VkExportMemoryAllocateInfoKHR` on this memory type.
    This is also the default in case of `pTypeExternalMemoryHandleTypes` = NULL.
    */
    const VkExternalMemoryHandleTypeFlagsKHR* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryTypeCount") pTypeExternalMemoryHandleTypes;
#endif // #if VMA_EXTERNAL_MEMORY
} VmaAllocatorCreateInfo;
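
/*
A minimal sketch of filling this structure and creating the allocator (assuming `instance`,
`physicalDevice`, and `device` were created by the application for Vulkan 1.2, and
`vulkanFunctions` was prepared as shown in the sketch at VmaVulkanFunctions):

    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
    allocatorCreateInfo.physicalDevice = physicalDevice;
    allocatorCreateInfo.device = device;
    allocatorCreateInfo.instance = instance;
    allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;

    VmaAllocator allocator;
    VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
*/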
1146
1147/// Information about existing #VmaAllocator object.
1148typedef struct VmaAllocatorInfo
1149{
1150 /** \brief Handle to Vulkan instance object.
1151
1152 This is the same value as has been passed through VmaAllocatorCreateInfo::instance.
1153 */
1154 VkInstance VMA_NOT_NULL instance;
1155 /** \brief Handle to Vulkan physical device object.
1156
1157 This is the same value as has been passed through VmaAllocatorCreateInfo::physicalDevice.
1158 */
1159 VkPhysicalDevice VMA_NOT_NULL physicalDevice;
1160 /** \brief Handle to Vulkan device object.
1161
1162 This is the same value as has been passed through VmaAllocatorCreateInfo::device.
1163 */
1164 VkDevice VMA_NOT_NULL device;
1165} VmaAllocatorInfo;
1166
1167/** @} */
1168
1169/**
1170\addtogroup group_stats
1171@{
1172*/
1173
1174/** \brief Calculated statistics of memory usage e.g. in a specific memory type, heap, custom pool, or total.
1175
1176These are fast to calculate.
1177See functions: vmaGetHeapBudgets(), vmaGetPoolStatistics().
1178*/
1179typedef struct VmaStatistics
1180{
1181 /** \brief Number of `VkDeviceMemory` objects - Vulkan memory blocks allocated.
1182 */
1183 uint32_t blockCount;
1184 /** \brief Number of #VmaAllocation objects allocated.
1185
1186 Dedicated allocations have their own blocks, so each one adds 1 to `allocationCount` as well as `blockCount`.
1187 */
1188 uint32_t allocationCount;
1189 /** \brief Number of bytes allocated in `VkDeviceMemory` blocks.
1190
1191 \note To avoid confusion, please be aware that what Vulkan calls an "allocation" - a whole `VkDeviceMemory` object
1192 (e.g. as in `VkPhysicalDeviceLimits::maxMemoryAllocationCount`) is called a "block" in VMA, while VMA calls
1193 "allocation" a #VmaAllocation object that represents a memory region sub-allocated from such block, usually for a single buffer or image.
1194 */
1195 VkDeviceSize blockBytes;
1196 /** \brief Total number of bytes occupied by all #VmaAllocation objects.
1197
    Always less than or equal to `blockBytes`.
    The difference `(blockBytes - allocationBytes)` is the amount of memory allocated from Vulkan
    but unused by any #VmaAllocation.
1201 */
1202 VkDeviceSize allocationBytes;
1203} VmaStatistics;
1204
1205/** \brief More detailed statistics than #VmaStatistics.
1206
1207These are slower to calculate. Use for debugging purposes.
1208See functions: vmaCalculateStatistics(), vmaCalculatePoolStatistics().
1209
A previous version of the statistics API provided averages, but they have been removed
because they can be easily calculated as follows:
1212
1213\code
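// Note: guard against division by zero when allocationCount or unusedRangeCount is 0.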
1214VkDeviceSize allocationSizeAvg = detailedStats.statistics.allocationBytes / detailedStats.statistics.allocationCount;
1215VkDeviceSize unusedBytes = detailedStats.statistics.blockBytes - detailedStats.statistics.allocationBytes;
1216VkDeviceSize unusedRangeSizeAvg = unusedBytes / detailedStats.unusedRangeCount;
1217\endcode
1218*/
1219typedef struct VmaDetailedStatistics
1220{
1221 /// Basic statistics.
1222 VmaStatistics statistics;
1223 /// Number of free ranges of memory between allocations.
1224 uint32_t unusedRangeCount;
1225 /// Smallest allocation size. `VK_WHOLE_SIZE` if there are 0 allocations.
1226 VkDeviceSize allocationSizeMin;
1227 /// Largest allocation size. 0 if there are 0 allocations.
1228 VkDeviceSize allocationSizeMax;
1229 /// Smallest empty range size. `VK_WHOLE_SIZE` if there are 0 empty ranges.
1230 VkDeviceSize unusedRangeSizeMin;
1231 /// Largest empty range size. 0 if there are 0 empty ranges.
1232 VkDeviceSize unusedRangeSizeMax;
1233} VmaDetailedStatistics;
1234
/** \brief General statistics from the current state of the Allocator -
total memory usage across all memory heaps and types.
1237
1238These are slower to calculate. Use for debugging purposes.
1239See function vmaCalculateStatistics().
1240*/
1241typedef struct VmaTotalStatistics
1242{
1243 VmaDetailedStatistics memoryType[VK_MAX_MEMORY_TYPES];
1244 VmaDetailedStatistics memoryHeap[VK_MAX_MEMORY_HEAPS];
1245 VmaDetailedStatistics total;
1246} VmaTotalStatistics;
1247
1248/** \brief Statistics of current memory usage and available budget for a specific memory heap.
1249
1250These are fast to calculate.
1251See function vmaGetHeapBudgets().
1252*/
1253typedef struct VmaBudget
1254{
1255 /** \brief Statistics fetched from the library.
1256 */
1257 VmaStatistics statistics;
1258 /** \brief Estimated current memory usage of the program, in bytes.
1259
1260 Fetched from system using VK_EXT_memory_budget extension if enabled.
1261
1262 It might be different than `statistics.blockBytes` (usually higher) due to additional implicit objects
1263 also occupying the memory, like swapchain, pipelines, descriptor heaps, command buffers, or
1264 `VkDeviceMemory` blocks allocated outside of this library, if any.
1265 */
1266 VkDeviceSize usage;
1267 /** \brief Estimated amount of memory available to the program, in bytes.
1268
1269 Fetched from system using VK_EXT_memory_budget extension if enabled.
1270
    It might be different (most probably smaller) than the size of the heap (`VkPhysicalDeviceMemoryProperties::memoryHeaps[heapIndex].size`)
    due to factors external to the program, decided by the operating system.
1273 Difference `budget - usage` is the amount of additional memory that can probably
1274 be allocated without problems. Exceeding the budget may result in various problems.
1275 */
1276 VkDeviceSize budget;
1277} VmaBudget;
1278
1279/** @} */
1280
1281/**
1282\addtogroup group_alloc
1283@{
1284*/
1285
1286/** \brief Parameters of new #VmaAllocation.
1287
1288To be used with functions like vmaCreateBuffer(), vmaCreateImage(), and many others.
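
For example, a sketch of creating a buffer with commonly used parameters (assuming `allocator` is a valid #VmaAllocator):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, NULL);
// Check res...
\endcode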
1289*/
1290typedef struct VmaAllocationCreateInfo
1291{
1292 /// Use #VmaAllocationCreateFlagBits enum.
1293 VmaAllocationCreateFlags flags;
1294 /** \brief Intended usage of memory.
1295
    You can leave it as #VMA_MEMORY_USAGE_UNKNOWN if you specify memory requirements in another way. \n
    If `pool` is not null, this member is ignored.
1298 */
1299 VmaMemoryUsage usage;
1300 /** \brief Flags that must be set in a Memory Type chosen for an allocation.
1301
    Leave 0 if you specify memory requirements in another way. \n
    If `pool` is not null, this member is ignored.*/
1304 VkMemoryPropertyFlags requiredFlags;
1305 /** \brief Flags that preferably should be set in a memory type chosen for an allocation.
1306
1307 Set to 0 if no additional flags are preferred. \n
1308 If `pool` is not null, this member is ignored. */
1309 VkMemoryPropertyFlags preferredFlags;
1310 /** \brief Bitmask containing one bit set for every memory type acceptable for this allocation.
1311
1312 Value 0 is equivalent to `UINT32_MAX` - it means any memory type is accepted if
1313 it meets other requirements specified by this structure, with no further
1314 restrictions on memory type index. \n
1315 If `pool` is not null, this member is ignored.
1316 */
1317 uint32_t memoryTypeBits;
1318 /** \brief Pool that this allocation should be created in.
1319
1320 Leave `VK_NULL_HANDLE` to allocate from default pool. If not null, members:
1321 `usage`, `requiredFlags`, `preferredFlags`, `memoryTypeBits` are ignored.
1322 */
1323 VmaPool VMA_NULLABLE pool;
1324 /** \brief Custom general-purpose pointer that will be stored in #VmaAllocation, can be read as VmaAllocationInfo::pUserData and changed using vmaSetAllocationUserData().
1325
    If #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT is used, it must be either
    null or a pointer to a null-terminated string. The string will then be copied to an
    internal buffer, so it doesn't need to stay valid after the allocation call.
1329 */
1330 void* VMA_NULLABLE pUserData;
1331 /** \brief A floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
1332
1333 It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object
1334 and this allocation ends up as dedicated or is explicitly forced as dedicated using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
1335 Otherwise, it has the priority of a memory block where it is placed and this variable is ignored.
1336 */
1337 float priority;
1338} VmaAllocationCreateInfo;
1339
/// Describes parameters of created #VmaPool.
1341typedef struct VmaPoolCreateInfo
1342{
1343 /** \brief Vulkan memory type index to allocate this pool from.
1344 */
1345 uint32_t memoryTypeIndex;
1346 /** \brief Use combination of #VmaPoolCreateFlagBits.
1347 */
1348 VmaPoolCreateFlags flags;
1349 /** \brief Size of a single `VkDeviceMemory` block to be allocated as part of this pool, in bytes. Optional.
1350
    Specify a nonzero value to set an explicit, constant size of memory blocks used by this
    pool.
1353
1354 Leave 0 to use default and let the library manage block sizes automatically.
1355 Sizes of particular blocks may vary.
1356 In this case, the pool will also support dedicated allocations.
1357 */
1358 VkDeviceSize blockSize;
1359 /** \brief Minimum number of blocks to be always allocated in this pool, even if they stay empty.
1360
    Set to 0 to have no preallocated blocks and allow the pool to be completely empty.
1362 */
1363 size_t minBlockCount;
1364 /** \brief Maximum number of blocks that can be allocated in this pool. Optional.
1365
    Set to 0 to use the default, which is `SIZE_MAX`, meaning no limit.

    Set to the same value as VmaPoolCreateInfo::minBlockCount to have a fixed amount of memory allocated
    throughout the whole lifetime of this pool.
1370 */
1371 size_t maxBlockCount;
1372 /** \brief A floating-point value between 0 and 1, indicating the priority of the allocations in this pool relative to other memory allocations.
1373
1374 It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object.
1375 Otherwise, this variable is ignored.
1376 */
1377 float priority;
1378 /** \brief Additional minimum alignment to be used for all allocations created from this pool. Can be 0.
1379
    Leave 0 (default) not to impose any additional alignment. If not 0, it must be a power of two.
    It can be useful in cases where the alignment returned by Vulkan functions like `vkGetBufferMemoryRequirements` is not enough,
    e.g. when doing interop with OpenGL.
1383 */
1384 VkDeviceSize minAllocationAlignment;
1385 /** \brief Additional `pNext` chain to be attached to `VkMemoryAllocateInfo` used for every allocation made by this pool. Optional.
1386
1387 Optional, can be null. If not null, it must point to a `pNext` chain of structures that can be attached to `VkMemoryAllocateInfo`.
1388 It can be useful for special needs such as adding `VkExportMemoryAllocateInfoKHR`.
1389 Structures pointed by this member must remain alive and unchanged for the whole lifetime of the custom pool.
1390
    Please note that some structures, e.g. `VkMemoryPriorityAllocateInfoEXT`, `VkMemoryDedicatedAllocateInfoKHR`,
    can be attached automatically by this library when using other, more convenient features of it.
1393 */
1394 void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkMemoryAllocateInfo) pMemoryAllocateNext;
1395} VmaPoolCreateInfo;
1396
1397/** @} */
1398
1399/**
1400\addtogroup group_alloc
1401@{
1402*/
1403
1404/**
Parameters of a #VmaAllocation object that can be retrieved using function vmaGetAllocationInfo().
1406
1407There is also an extended version of this structure that carries additional parameters: #VmaAllocationInfo2.
1408*/
1409typedef struct VmaAllocationInfo
1410{
1411 /** \brief Memory type index that this allocation was allocated from.
1412
1413 It never changes.
1414 */
1415 uint32_t memoryType;
1416 /** \brief Handle to Vulkan memory object.
1417
1418 Same memory object can be shared by multiple allocations.
1419
1420 It can change after the allocation is moved during \ref defragmentation.
1421 */
1422 VkDeviceMemory VMA_NULLABLE_NON_DISPATCHABLE deviceMemory;
1423 /** \brief Offset in `VkDeviceMemory` object to the beginning of this allocation, in bytes. `(deviceMemory, offset)` pair is unique to this allocation.
1424
    You usually don't need to use this offset. If you create a buffer or an image together with the allocation using e.g. functions
    vmaCreateBuffer() or vmaCreateImage(), functions that operate on these resources refer to the beginning of the buffer or image,
    not the entire device memory block. Functions like vmaMapMemory() and vmaBindBufferMemory() also refer to the beginning of the allocation
    and apply this offset automatically.
1429
1430 It can change after the allocation is moved during \ref defragmentation.
1431 */
1432 VkDeviceSize offset;
1433 /** \brief Size of this allocation, in bytes.
1434
1435 It never changes.
1436
    \note The allocation size returned in this variable may be greater than the size
    requested for the resource e.g. as `VkBufferCreateInfo::size`. The whole size of the
    allocation is accessible for operations on memory e.g. using a pointer after
    mapping with vmaMapMemory(), but operations on the resource e.g. using
    `vkCmdCopyBuffer` must be limited to the size of the resource.
1442 */
1443 VkDeviceSize size;
1444 /** \brief Pointer to the beginning of this allocation as mapped data.
1445
1446 If the allocation hasn't been mapped using vmaMapMemory() and hasn't been
1447 created with #VMA_ALLOCATION_CREATE_MAPPED_BIT flag, this value is null.
1448
1449 It can change after call to vmaMapMemory(), vmaUnmapMemory().
1450 It can also change after the allocation is moved during \ref defragmentation.
1451 */
1452 void* VMA_NULLABLE pMappedData;
1453 /** \brief Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vmaSetAllocationUserData().
1454
1455 It can change after call to vmaSetAllocationUserData() for this allocation.
1456 */
1457 void* VMA_NULLABLE pUserData;
1458 /** \brief Custom allocation name that was set with vmaSetAllocationName().
1459
1460 It can change after call to vmaSetAllocationName() for this allocation.
1461
1462 Another way to set custom name is to pass it in VmaAllocationCreateInfo::pUserData with
1463 additional flag #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT set [DEPRECATED].
1464 */
1465 const char* VMA_NULLABLE pName;
1466} VmaAllocationInfo;
1467
1468/// Extended parameters of a #VmaAllocation object that can be retrieved using function vmaGetAllocationInfo2().
1469typedef struct VmaAllocationInfo2
1470{
1471 /** \brief Basic parameters of the allocation.
1472
1473 If you need only these, you can use function vmaGetAllocationInfo() and structure #VmaAllocationInfo instead.
1474 */
1475 VmaAllocationInfo allocationInfo;
1476 /** \brief Size of the `VkDeviceMemory` block that the allocation belongs to.
1477
1478 In case of an allocation with dedicated memory, it will be equal to `allocationInfo.size`.
1479 */
1480 VkDeviceSize blockSize;
1481 /** \brief `VK_TRUE` if the allocation has dedicated memory, `VK_FALSE` if it was placed as part of a larger memory block.
1482
1483 When `VK_TRUE`, it also means `VkMemoryDedicatedAllocateInfo` was used when creating the allocation
1484 (if VK_KHR_dedicated_allocation extension or Vulkan version >= 1.1 is enabled).
1485 */
1486 VkBool32 dedicatedMemory;
1487} VmaAllocationInfo2;
1488
1489/** Callback function called during vmaBeginDefragmentation() to check custom criterion about ending current defragmentation pass.
1490
Should return true if the defragmentation needs to stop the current pass.
1492*/
1493typedef VkBool32 (VKAPI_PTR* PFN_vmaCheckDefragmentationBreakFunction)(void* VMA_NULLABLE pUserData);
1494
1495/** \brief Parameters for defragmentation.
1496
1497To be used with function vmaBeginDefragmentation().
1498*/
1499typedef struct VmaDefragmentationInfo
1500{
1501 /// \brief Use combination of #VmaDefragmentationFlagBits.
1502 VmaDefragmentationFlags flags;
1503 /** \brief Custom pool to be defragmented.
1504
    If null, then the default pools will undergo the defragmentation process.
1506 */
1507 VmaPool VMA_NULLABLE pool;
    /** \brief Maximum number of bytes that can be copied during a single pass, while moving allocations to different places.
1509
1510 `0` means no limit.
1511 */
1512 VkDeviceSize maxBytesPerPass;
    /** \brief Maximum number of allocations that can be moved during a single pass to a different place.
1514
1515 `0` means no limit.
1516 */
1517 uint32_t maxAllocationsPerPass;
1518 /** \brief Optional custom callback for stopping vmaBeginDefragmentation().
1519
    It must return true to break the current defragmentation pass.
1521 */
1522 PFN_vmaCheckDefragmentationBreakFunction VMA_NULLABLE pfnBreakCallback;
1523 /// \brief Optional data to pass to custom callback for stopping pass of defragmentation.
1524 void* VMA_NULLABLE pBreakCallbackUserData;
1525} VmaDefragmentationInfo;
1526
1527/// Single move of an allocation to be done for defragmentation.
1528typedef struct VmaDefragmentationMove
1529{
1530 /// Operation to be performed on the allocation by vmaEndDefragmentationPass(). Default value is #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY. You can modify it.
1531 VmaDefragmentationMoveOperation operation;
1532 /// Allocation that should be moved.
1533 VmaAllocation VMA_NOT_NULL srcAllocation;
1534 /** \brief Temporary allocation pointing to destination memory that will replace `srcAllocation`.
1535
1536 \warning Do not store this allocation in your data structures! It exists only temporarily, for the duration of the defragmentation pass,
1537 to be used for binding new buffer/image to the destination memory using e.g. vmaBindBufferMemory().
1538 vmaEndDefragmentationPass() will destroy it and make `srcAllocation` point to this memory.
1539 */
1540 VmaAllocation VMA_NOT_NULL dstTmpAllocation;
1541} VmaDefragmentationMove;
1542
1543/** \brief Parameters for incremental defragmentation steps.
1544
1545To be used with function vmaBeginDefragmentationPass().
1546*/
1547typedef struct VmaDefragmentationPassMoveInfo
1548{
1549 /// Number of elements in the `pMoves` array.
1550 uint32_t moveCount;
1551 /** \brief Array of moves to be performed by the user in the current defragmentation pass.
1552
1553 Pointer to an array of `moveCount` elements, owned by VMA, created in vmaBeginDefragmentationPass(), destroyed in vmaEndDefragmentationPass().
1554
1555 For each element, you should:
1556
    1. Create a new buffer/image in the place pointed to by VmaDefragmentationMove::dstTmpAllocation.
1558 2. Copy data from the VmaDefragmentationMove::srcAllocation e.g. using `vkCmdCopyBuffer`, `vkCmdCopyImage`.
1559 3. Make sure these commands finished executing on the GPU.
1560 4. Destroy the old buffer/image.
1561
    Only then can you finish the defragmentation pass by calling vmaEndDefragmentationPass().
    After this call, the allocation will point to the new place in memory.
1564
    Alternatively, if you cannot move a specific allocation, you can set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
1566
1567 Alternatively, if you decide you want to completely remove the allocation:
1568
1569 1. Destroy its buffer/image.
1570 2. Set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
1571
1572 Then, after vmaEndDefragmentationPass() the allocation will be freed.
1573 */
1574 VmaDefragmentationMove* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(moveCount) pMoves;
1575} VmaDefragmentationPassMoveInfo;
1576
1577/// Statistics returned for defragmentation process in function vmaEndDefragmentation().
1578typedef struct VmaDefragmentationStats
1579{
1580 /// Total number of bytes that have been copied while moving allocations to different places.
1581 VkDeviceSize bytesMoved;
1582 /// Total number of bytes that have been released to the system by freeing empty `VkDeviceMemory` objects.
1583 VkDeviceSize bytesFreed;
1584 /// Number of allocations that have been moved to different places.
1585 uint32_t allocationsMoved;
1586 /// Number of empty `VkDeviceMemory` objects that have been released to the system.
1587 uint32_t deviceMemoryBlocksFreed;
1588} VmaDefragmentationStats;
1589
1590/** @} */
1591
1592/**
1593\addtogroup group_virtual
1594@{
1595*/
1596
/** \brief Parameters of created #VmaVirtualBlock object to be passed to vmaCreateVirtualBlock().
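
A minimal usage sketch (vmaCreateVirtualBlock() and vmaVirtualAllocate() are declared later in this file):

\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MB

VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
// Check res...

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;

VmaVirtualAllocation alloc;
VkDeviceSize offset;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
// On success, `offset` is the place of the allocation within the block.
\endcode
*/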
1598typedef struct VmaVirtualBlockCreateInfo
1599{
1600 /** \brief Total size of the virtual block.
1601
1602 Sizes can be expressed in bytes or any units you want as long as you are consistent in using them.
    For example, if you allocate from some array of structures, 1 can mean a single instance of an entire structure.
1604 */
1605 VkDeviceSize size;
1606
1607 /** \brief Use combination of #VmaVirtualBlockCreateFlagBits.
1608 */
1609 VmaVirtualBlockCreateFlags flags;
1610
1611 /** \brief Custom CPU memory allocation callbacks. Optional.
1612
1613 Optional, can be null. When specified, they will be used for all CPU-side memory allocations.
1614 */
1615 const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
1616} VmaVirtualBlockCreateInfo;
1617
1618/// Parameters of created virtual allocation to be passed to vmaVirtualAllocate().
1619typedef struct VmaVirtualAllocationCreateInfo
1620{
1621 /** \brief Size of the allocation.
1622
1623 Cannot be zero.
1624 */
1625 VkDeviceSize size;
1626 /** \brief Required alignment of the allocation. Optional.
1627
    Must be a power of two. The special value 0 has the same meaning as 1, i.e. no special alignment is required, so the allocation can start at any offset.
1629 */
1630 VkDeviceSize alignment;
1631 /** \brief Use combination of #VmaVirtualAllocationCreateFlagBits.
1632 */
1633 VmaVirtualAllocationCreateFlags flags;
1634 /** \brief Custom pointer to be associated with the allocation. Optional.
1635
1636 It can be any value and can be used for user-defined purposes. It can be fetched or changed later.
1637 */
1638 void* VMA_NULLABLE pUserData;
1639} VmaVirtualAllocationCreateInfo;
1640
1641/// Parameters of an existing virtual allocation, returned by vmaGetVirtualAllocationInfo().
1642typedef struct VmaVirtualAllocationInfo
1643{
1644 /** \brief Offset of the allocation.
1645
1646 Offset at which the allocation was made.
1647 */
1648 VkDeviceSize offset;
1649 /** \brief Size of the allocation.
1650
1651 Same value as passed in VmaVirtualAllocationCreateInfo::size.
1652 */
1653 VkDeviceSize size;
1654 /** \brief Custom pointer associated with the allocation.
1655
1656 Same value as passed in VmaVirtualAllocationCreateInfo::pUserData or to vmaSetVirtualAllocationUserData().
1657 */
1658 void* VMA_NULLABLE pUserData;
1659} VmaVirtualAllocationInfo;
1660
1661/** @} */
1662
1663#endif // _VMA_DATA_TYPES_DECLARATIONS
1664
1665#ifndef _VMA_FUNCTION_HEADERS
1666
1667/**
1668\addtogroup group_init
1669@{
1670*/
1671
/** \brief Creates #VmaAllocator object.
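
A minimal initialization sketch, assuming `instance`, `physicalDevice`, and `device` are valid
Vulkan handles created elsewhere:

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = &vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = &vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;

VmaAllocator allocator;
VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
// Check res...
\endcode
*/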
1673VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
1674 const VmaAllocatorCreateInfo* VMA_NOT_NULL pCreateInfo,
1675 VmaAllocator VMA_NULLABLE* VMA_NOT_NULL pAllocator);
1676
1677/// Destroys allocator object.
1678VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
1679 VmaAllocator VMA_NULLABLE allocator);
1680
1681/** \brief Returns information about existing #VmaAllocator object - handle to Vulkan device etc.
1682
It might be useful if you want to keep just the #VmaAllocator handle and fetch the other required handles to
`VkPhysicalDevice`, `VkDevice`, etc. every time using this function.
1685*/
1686VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(
1687 VmaAllocator VMA_NOT_NULL allocator,
1688 VmaAllocatorInfo* VMA_NOT_NULL pAllocatorInfo);
1689
1690/**
PhysicalDeviceProperties are fetched from the physicalDevice by the allocator.
You can access them here, without fetching them again on your own.
1693*/
1694VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
1695 VmaAllocator VMA_NOT_NULL allocator,
1696 const VkPhysicalDeviceProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceProperties);
1697
1698/**
PhysicalDeviceMemoryProperties are fetched from the physicalDevice by the allocator.
You can access them here, without fetching them again on your own.
1701*/
1702VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
1703 VmaAllocator VMA_NOT_NULL allocator,
1704 const VkPhysicalDeviceMemoryProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceMemoryProperties);
1705
1706/**
1707\brief Given Memory Type Index, returns Property Flags of this memory type.
1708
1709This is just a convenience function. Same information can be obtained using
1710vmaGetMemoryProperties().
1711*/
1712VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
1713 VmaAllocator VMA_NOT_NULL allocator,
1714 uint32_t memoryTypeIndex,
1715 VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
1716
1717/** \brief Sets index of the current frame.
1718*/
1719VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
1720 VmaAllocator VMA_NOT_NULL allocator,
1721 uint32_t frameIndex);
1722
1723/** @} */
1724
1725/**
1726\addtogroup group_stats
1727@{
1728*/
1729
1730/** \brief Retrieves statistics from current state of the Allocator.
1731
1732This function is called "calculate" not "get" because it has to traverse all
1733internal data structures, so it may be quite slow. Use it for debugging purposes.
1734For faster but more brief statistics suitable to be called every frame or every allocation,
1735use vmaGetHeapBudgets().
1736
Note that when using the allocator from multiple threads, the returned information may immediately
become outdated.
1739*/
1740VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
1741 VmaAllocator VMA_NOT_NULL allocator,
1742 VmaTotalStatistics* VMA_NOT_NULL pStats);
1743
1744/** \brief Retrieves information about current memory usage and budget for all memory heaps.
1745
1746\param allocator
\param[out] pBudgets Must point to an array with the number of elements at least equal to the number of memory heaps in the physical device used.
1748
1749This function is called "get" not "calculate" because it is very fast, suitable to be called
1750every frame or every allocation. For more detailed statistics use vmaCalculateStatistics().
1751
Note that when using the allocator from multiple threads, the returned information may immediately
become outdated.
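
For example, a sketch that logs the usage and budget of every heap (assuming `allocator` is a valid #VmaAllocator):

\code
const VkPhysicalDeviceMemoryProperties* memProps;
vmaGetMemoryProperties(allocator, &memProps);

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);

for(uint32_t heapIndex = 0; heapIndex < memProps->memoryHeapCount; ++heapIndex)
{
    printf("Heap %u: usage = %llu B, budget = %llu B\n", heapIndex,
        (unsigned long long)budgets[heapIndex].usage,
        (unsigned long long)budgets[heapIndex].budget);
}
\endcode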
1754*/
1755VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
1756 VmaAllocator VMA_NOT_NULL allocator,
1757 VmaBudget* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pBudgets);
1758
1759/** @} */
1760
1761/**
1762\addtogroup group_alloc
1763@{
1764*/
1765
1766/**
1767\brief Helps to find memoryTypeIndex, given memoryTypeBits and VmaAllocationCreateInfo.
1768
1769This algorithm tries to find a memory type that:
1770
1771- Is allowed by memoryTypeBits.
1772- Contains all the flags from pAllocationCreateInfo->requiredFlags.
1773- Matches intended usage.
1774- Has as many flags from pAllocationCreateInfo->preferredFlags as possible.
1775
\return Returns VK_ERROR_FEATURE_NOT_PRESENT if not found. Receiving such a result
from this function or any other allocating function probably means that your
1778device doesn't support any memory type with requested features for the specific
1779type of resource you want to use it for. Please check parameters of your
1780resource, like image layout (OPTIMAL versus LINEAR) or mip level count.
1781*/
1782VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
1783 VmaAllocator VMA_NOT_NULL allocator,
1784 uint32_t memoryTypeBits,
1785 const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
1786 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
1787
1788/**
1789\brief Helps to find memoryTypeIndex, given VkBufferCreateInfo and VmaAllocationCreateInfo.
1790
1791It can be useful e.g. to determine value to be used as VmaPoolCreateInfo::memoryTypeIndex.
1792It internally creates a temporary, dummy buffer that never has memory bound.
1793*/
1794VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
1795 VmaAllocator VMA_NOT_NULL allocator,
1796 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
1797 const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
1798 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
1799
1800/**
1801\brief Helps to find memoryTypeIndex, given VkImageCreateInfo and VmaAllocationCreateInfo.
1802
1803It can be useful e.g. to determine value to be used as VmaPoolCreateInfo::memoryTypeIndex.
1804It internally creates a temporary, dummy image that never has memory bound.
1805*/
1806VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
1807 VmaAllocator VMA_NOT_NULL allocator,
1808 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
1809 const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
1810 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
1811
1812/** \brief Allocates Vulkan device memory and creates #VmaPool object.
1813
1814\param allocator Allocator object.
1815\param pCreateInfo Parameters of pool to create.
1816\param[out] pPool Handle to created pool.
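
A sketch of typical usage, combined with vmaFindMemoryTypeIndexForBufferInfo() to pick the memory type
(the buffer parameters below are example values):

\code
VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
exampleBufCreateInfo.size = 1024; // Example size; only the usage flags really matter here.
exampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
    &exampleBufCreateInfo, &allocCreateInfo, &memTypeIndex);
// Check res...

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;

VmaPool pool;
res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
// Check res...
\endcode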
1817*/
1818VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
1819 VmaAllocator VMA_NOT_NULL allocator,
1820 const VmaPoolCreateInfo* VMA_NOT_NULL pCreateInfo,
1821 VmaPool VMA_NULLABLE* VMA_NOT_NULL pPool);
1822
1823/** \brief Destroys #VmaPool object and frees Vulkan device memory.
1824*/
1825VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
1826 VmaAllocator VMA_NOT_NULL allocator,
1827 VmaPool VMA_NULLABLE pool);
1828
1829/** @} */
1830
1831/**
1832\addtogroup group_stats
1833@{
1834*/
1835
1836/** \brief Retrieves statistics of existing #VmaPool object.
1837
1838\param allocator Allocator object.
1839\param pool Pool object.
1840\param[out] pPoolStats Statistics of specified pool.
1841
1842Note that when using the pool from multiple threads, returned information may immediately
1843become outdated.
1844*/
1845VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
1846 VmaAllocator VMA_NOT_NULL allocator,
1847 VmaPool VMA_NOT_NULL pool,
1848 VmaStatistics* VMA_NOT_NULL pPoolStats);
1849
1850/** \brief Retrieves detailed statistics of existing #VmaPool object.
1851
1852\param allocator Allocator object.
1853\param pool Pool object.
1854\param[out] pPoolStats Statistics of specified pool.
1855*/
1856VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
1857 VmaAllocator VMA_NOT_NULL allocator,
1858 VmaPool VMA_NOT_NULL pool,
1859 VmaDetailedStatistics* VMA_NOT_NULL pPoolStats);
1860
1861/** @} */
1862
1863/**
1864\addtogroup group_alloc
1865@{
1866*/
1867
1868/** \brief Checks magic number in margins around all allocations in given memory pool in search for corruptions.
1869
1870Corruption detection is enabled only when `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
1871`VMA_DEBUG_MARGIN` is defined to nonzero and the pool is created in memory type that is
1872`HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
1873
1874Possible return values:
1875
1876- `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for specified pool.
1877- `VK_SUCCESS` - corruption detection has been performed and succeeded.
1878- `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
1879 `VMA_ASSERT` is also fired in that case.
1880- Other value: Error returned by Vulkan, e.g. memory mapping failure.
1881*/
1882VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(
1883 VmaAllocator VMA_NOT_NULL allocator,
1884 VmaPool VMA_NOT_NULL pool);
1885
1886/** \brief Retrieves name of a custom pool.
1887
1888After the call `ppName` is either null or points to an internally-owned null-terminated string
1889containing name of the pool that was previously set. The pointer becomes invalid when the pool is
1890destroyed or its name is changed using vmaSetPoolName().
1891*/
1892VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
1893 VmaAllocator VMA_NOT_NULL allocator,
1894 VmaPool VMA_NOT_NULL pool,
1895 const char* VMA_NULLABLE* VMA_NOT_NULL ppName);
1896
1897/** \brief Sets name of a custom pool.
1898
`pName` can be either null or a pointer to a null-terminated string with the new name for the pool.
The function makes an internal copy of the string, so it can be changed or freed immediately after this call.
1901*/
1902VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
1903 VmaAllocator VMA_NOT_NULL allocator,
1904 VmaPool VMA_NOT_NULL pool,
1905 const char* VMA_NULLABLE pName);
1906
1907/** \brief General purpose memory allocation.
1908
1909\param allocator
1910\param pVkMemoryRequirements
1911\param pCreateInfo
1912\param[out] pAllocation Handle to allocated memory.
1913\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
1914
1915You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
1916
1917It is recommended to use vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage(),
1918vmaCreateBuffer(), vmaCreateImage() instead whenever possible.
1919*/
1920VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
1921 VmaAllocator VMA_NOT_NULL allocator,
1922 const VkMemoryRequirements* VMA_NOT_NULL pVkMemoryRequirements,
1923 const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
1924 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
1925 VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
1926
1927/** \brief General purpose memory allocation for multiple allocation objects at once.
1928
1929\param allocator Allocator object.
1930\param pVkMemoryRequirements Memory requirements for each allocation.
1931\param pCreateInfo Creation parameters for each allocation.
1932\param allocationCount Number of allocations to make.
1933\param[out] pAllocations Pointer to array that will be filled with handles to created allocations.
1934\param[out] pAllocationInfo Optional. Pointer to array that will be filled with parameters of created allocations.
1935
1936You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
1937
1938Word "pages" is just a suggestion to use this function to allocate pieces of memory needed for sparse binding.
1939It is just a general purpose allocation function able to make multiple allocations at once.
1940It may be internally optimized to be more efficient than calling vmaAllocateMemory() `allocationCount` times.
1941
All allocations are made using the same parameters. All of them are created out of the same memory pool and type.
If any allocation fails, all allocations already made within this function call are also freed, so that when the
returned result is not `VK_SUCCESS`, the `pAllocations` array is always entirely filled with `VK_NULL_HANDLE`.
1945*/
1946VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
1947 VmaAllocator VMA_NOT_NULL allocator,
1948 const VkMemoryRequirements* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pVkMemoryRequirements,
1949 const VmaAllocationCreateInfo* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pCreateInfo,
1950 size_t allocationCount,
1951 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
1952 VmaAllocationInfo* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationInfo);
1953
1954/** \brief Allocates memory suitable for given `VkBuffer`.
1955
1956\param allocator
1957\param buffer
1958\param pCreateInfo
1959\param[out] pAllocation Handle to allocated memory.
1960\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
1961
1962It only creates #VmaAllocation. To bind the memory to the buffer, use vmaBindBufferMemory().
1963
1964This is a special-purpose function. In most cases you should use vmaCreateBuffer().
1965
1966You must free the allocation using vmaFreeMemory() when no longer needed.
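
A sketch of the separate create + allocate + bind flow (assuming `allocator` and `device` are valid;
explicit `requiredFlags` are used here only as an example way to choose the memory type):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VkBuffer buf;
VkResult res = vkCreateBuffer(device, &bufCreateInfo, NULL, &buf);
// Check res...

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
res = vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, NULL);
// Check res...

res = vmaBindBufferMemory(allocator, alloc, buf);
\endcode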
1967*/
1968VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
1969 VmaAllocator VMA_NOT_NULL allocator,
1970 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
1971 const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
1972 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
1973 VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
1974
1975/** \brief Allocates memory suitable for given `VkImage`.
1976
1977\param allocator
1978\param image
1979\param pCreateInfo
1980\param[out] pAllocation Handle to allocated memory.
1981\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
1982
It only creates #VmaAllocation. To bind the memory to the image, use vmaBindImageMemory().
1984
1985This is a special-purpose function. In most cases you should use vmaCreateImage().
1986
1987You must free the allocation using vmaFreeMemory() when no longer needed.
1988*/
1989VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
1990 VmaAllocator VMA_NOT_NULL allocator,
1991 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
1992 const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
1993 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
1994 VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
1995
1996/** \brief Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(), or vmaAllocateMemoryForImage().
1997
1998Passing `VK_NULL_HANDLE` as `allocation` is valid. Such function call is just skipped.
1999*/
2000VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
2001 VmaAllocator VMA_NOT_NULL allocator,
2002 const VmaAllocation VMA_NULLABLE allocation);
2003
2004/** \brief Frees memory and destroys multiple allocations.
2005
2006Word "pages" is just a suggestion to use this function to free pieces of memory used for sparse binding.
2007It is just a general purpose function to free memory and destroy allocations made using e.g. vmaAllocateMemory(),
2008vmaAllocateMemoryPages() and other functions.
2009It may be internally optimized to be more efficient than calling vmaFreeMemory() `allocationCount` times.
2010
2011Allocations in `pAllocations` array can come from any memory pools and types.
2012Passing `VK_NULL_HANDLE` as elements of `pAllocations` array is valid. Such entries are just skipped.
2013*/
2014VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
2015 VmaAllocator VMA_NOT_NULL allocator,
2016 size_t allocationCount,
2017 const VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations);
2018
2019/** \brief Returns current information about specified allocation.
2020
2021Current parameters of given allocation are returned in `pAllocationInfo`.
2022
This function doesn't lock any mutex, so it should be quite efficient; still, you should
avoid calling it too often.
You can retrieve the same VmaAllocationInfo structure while creating your resource, from functions
vmaCreateBuffer() or vmaCreateImage(). You can then cache it, as long as you are sure the parameters don't change
(e.g. due to defragmentation).
2028
2029There is also a new function vmaGetAllocationInfo2() that offers extended information
2030about the allocation, returned using new structure #VmaAllocationInfo2.
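
For example, a sketch of fetching the underlying memory region of an existing allocation `alloc`:

\code
VmaAllocationInfo allocInfo;
vmaGetAllocationInfo(allocator, alloc, &allocInfo);
// allocInfo.deviceMemory and allocInfo.offset identify the memory region of this allocation.
\endcode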
2031*/
2032VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
2033 VmaAllocator VMA_NOT_NULL allocator,
2034 VmaAllocation VMA_NOT_NULL allocation,
2035 VmaAllocationInfo* VMA_NOT_NULL pAllocationInfo);
2036
2037/** \brief Returns extended information about specified allocation.
2038
2039Current parameters of given allocation are returned in `pAllocationInfo`.
2040Extended parameters in structure #VmaAllocationInfo2 include memory block size
2041and a flag telling whether the allocation has dedicated memory.
2042It can be useful e.g. for interop with OpenGL.
2043*/
2044VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo2(
2045 VmaAllocator VMA_NOT_NULL allocator,
2046 VmaAllocation VMA_NOT_NULL allocation,
2047 VmaAllocationInfo2* VMA_NOT_NULL pAllocationInfo);
2048
2049/** \brief Sets pUserData in given allocation to new value.
2050
2051The value of pointer `pUserData` is copied to allocation's `pUserData`.
2052It is opaque, so you can use it however you want - e.g.
as a pointer, an ordinal number, or some handle to your own data.
2054*/
2055VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
2056 VmaAllocator VMA_NOT_NULL allocator,
2057 VmaAllocation VMA_NOT_NULL allocation,
2058 void* VMA_NULLABLE pUserData);
2059
2060/** \brief Sets pName in given allocation to new value.
2061
`pName` must be either null or a pointer to a null-terminated string. The function
makes a local copy of the string and sets it as the allocation's `pName`. The string
passed as `pName` doesn't need to stay valid for the whole lifetime of the allocation -
you can free it after this call. The string previously pointed to by the allocation's
`pName` is freed from memory.
2067*/
2068VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
2069 VmaAllocator VMA_NOT_NULL allocator,
2070 VmaAllocation VMA_NOT_NULL allocation,
2071 const char* VMA_NULLABLE pName);
2072
2073/**
2074\brief Given an allocation, returns Property Flags of its memory type.
2075
2076This is just a convenience function. Same information can be obtained using
2077vmaGetAllocationInfo() + vmaGetMemoryProperties().
2078*/
2079VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
2080 VmaAllocator VMA_NOT_NULL allocator,
2081 VmaAllocation VMA_NOT_NULL allocation,
2082 VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
2083
2084
2085#if VMA_EXTERNAL_MEMORY_WIN32
2086/**
2087\brief Given an allocation, returns Win32 handle that may be imported by other processes or APIs.
2088
\param hTargetProcess Must be a valid handle to the target process, or null. If it is null, the function returns a
    handle for the current process.
2091\param[out] pHandle Output parameter that returns the handle.
2092
The function fills `pHandle` with a handle that can be used in the target process.
The handle is fetched using the function `vkGetMemoryWin32HandleKHR`.
When no longer needed, you must close it using:
2096
2097\code
2098CloseHandle(handle);
2099\endcode
2100
2101You can close it any time, before or after destroying the allocation object.
2102It is reference-counted internally by Windows.
2103
2104Note the handle is returned for the entire `VkDeviceMemory` block that the allocation belongs to.
2105If the allocation is sub-allocated from a larger block, you may need to consider the offset of the allocation
2106(VmaAllocationInfo::offset).
2107
2108If the function fails with `VK_ERROR_FEATURE_NOT_PRESENT` error code, please double-check
2109that VmaVulkanFunctions::vkGetMemoryWin32HandleKHR function pointer is set, e.g. either by using `VMA_DYNAMIC_VULKAN_FUNCTIONS`
2110or by manually passing it through VmaAllocatorCreateInfo::pVulkanFunctions.
2111
2112For more information, see chapter \ref vk_khr_external_memory_win32.
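
A sketch of fetching and releasing a handle for the current process (assuming `alloc` is a valid
#VmaAllocation created with appropriate export parameters):

\code
HANDLE handle;
VkResult res = vmaGetMemoryWin32Handle(allocator, alloc, NULL, &handle);
// Check res... Use the handle...
CloseHandle(handle);
\endcode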
2113*/
2114VMA_CALL_PRE VkResult VMA_CALL_POST vmaGetMemoryWin32Handle(VmaAllocator VMA_NOT_NULL allocator,
2115 VmaAllocation VMA_NOT_NULL allocation, HANDLE hTargetProcess, HANDLE* VMA_NOT_NULL pHandle);
2116#endif // VMA_EXTERNAL_MEMORY_WIN32
2117
2118/** \brief Maps memory represented by given allocation and returns pointer to it.
2119
2120Maps memory represented by given allocation to make it accessible to CPU code.
When it succeeds, `*ppData` contains a pointer to the first byte of this memory.
2122
2123\warning
If the allocation is part of a bigger `VkDeviceMemory` block, the returned pointer is
correctly offset to the beginning of the region assigned to this particular allocation.
Unlike the result of `vkMapMemory`, it points to the allocation, not to the beginning of the whole block.
2127You should not add VmaAllocationInfo::offset to it!
2128
Mapping is internally reference-counted and synchronized, so even though the raw Vulkan
function `vkMapMemory()` cannot be used to map the same block of `VkDeviceMemory`
multiple times simultaneously, it is safe to call this function on allocations
assigned to the same memory block. Actual Vulkan memory will be mapped on first
mapping and unmapped on last unmapping.
2134
2135If the function succeeded, you must call vmaUnmapMemory() to unmap the
2136allocation when mapping is no longer needed or before freeing the allocation, at
2137the latest.
2138
It is also safe to call this function multiple times on the same allocation. You
must call vmaUnmapMemory() the same number of times as you called vmaMapMemory().
2141
It is also safe to call this function on an allocation created with the
#VMA_ALLOCATION_CREATE_MAPPED_BIT flag. Its memory stays mapped all the time.
You must still call vmaUnmapMemory() the same number of times as you called
vmaMapMemory(). You must not call vmaUnmapMemory() an additional time to free the
"0-th" mapping made automatically due to the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag.
2147
This function fails when used on an allocation made in a memory type that is not
`HOST_VISIBLE`.
2150
This function doesn't automatically flush or invalidate caches.
If the allocation is made from a memory type that is not `HOST_COHERENT`,
you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
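
For example, a sketch of a typical map, write, flush, unmap sequence (assuming `alloc` is a valid
#VmaAllocation in `HOST_VISIBLE` memory and `myData` is some host-side object):

\code
void* mappedData;
VkResult res = vmaMapMemory(allocator, alloc, &mappedData);
if(res == VK_SUCCESS)
{
    memcpy(mappedData, &myData, sizeof(myData));
    // Needed only when the memory type is not HOST_COHERENT:
    vmaFlushAllocation(allocator, alloc, 0, sizeof(myData));
    vmaUnmapMemory(allocator, alloc);
}
\endcode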
2154*/
2155VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
2156 VmaAllocator VMA_NOT_NULL allocator,
2157 VmaAllocation VMA_NOT_NULL allocation,
2158 void* VMA_NULLABLE* VMA_NOT_NULL ppData);
2159
2160/** \brief Unmaps memory represented by given allocation, mapped previously using vmaMapMemory().
2161
2162For details, see description of vmaMapMemory().
2163
This function doesn't automatically flush or invalidate caches.
If the allocation is made from a memory type that is not `HOST_COHERENT`,
you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
2167*/
2168VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
2169 VmaAllocator VMA_NOT_NULL allocator,
2170 VmaAllocation VMA_NOT_NULL allocation);
2171
2172/** \brief Flushes memory of given allocation.
2173
2174Calls `vkFlushMappedMemoryRanges()` for memory associated with given range of given allocation.
2175It needs to be called after writing to a mapped memory for memory types that are not `HOST_COHERENT`.
2176Unmap operation doesn't do that automatically.
2177
- `offset` must be relative to the beginning of the allocation.
- `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of the given allocation.
- `offset` and `size` don't have to be aligned.
  They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
- If `size` is 0, this call is ignored.
- If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
  this call is ignored.
2185
Warning! `offset` and `size` are relative to the contents of the given `allocation`.
If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
Do not pass the allocation's offset as `offset`!
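
For example (a sketch, assuming `alloc` is a valid, currently mapped #VmaAllocation):

\code
// Flush the whole allocation - offset and size are relative to this allocation:
vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);

// Flush only bytes [256, 256 + 64) within this allocation:
vmaFlushAllocation(allocator, alloc, 256, 64);
\endcode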
2189
2190This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
2191called, otherwise `VK_SUCCESS`.
2192*/
2193VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
2194 VmaAllocator VMA_NOT_NULL allocator,
2195 VmaAllocation VMA_NOT_NULL allocation,
2196 VkDeviceSize offset,
2197 VkDeviceSize size);
2198
2199/** \brief Invalidates memory of given allocation.
2200
2201Calls `vkInvalidateMappedMemoryRanges()` for memory associated with given range of given allocation.
2202It needs to be called before reading from a mapped memory for memory types that are not `HOST_COHERENT`.
2203Map operation doesn't do that automatically.
2204
- `offset` must be relative to the beginning of the allocation.
- `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of the given allocation.
- `offset` and `size` don't have to be aligned.
  They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
- If `size` is 0, this call is ignored.
- If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
  this call is ignored.
2212
Warning! `offset` and `size` are relative to the contents of the given `allocation`.
If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
Do not pass the allocation's offset as `offset`!
2216
2217This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if
2218it is called, otherwise `VK_SUCCESS`.
2219*/
2220VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
2221 VmaAllocator VMA_NOT_NULL allocator,
2222 VmaAllocation VMA_NOT_NULL allocation,
2223 VkDeviceSize offset,
2224 VkDeviceSize size);
2225
2226/** \brief Flushes memory of given set of allocations.
2227
2228Calls `vkFlushMappedMemoryRanges()` for memory associated with given ranges of given allocations.
2229For more information, see documentation of vmaFlushAllocation().
2230
2231\param allocator
2232\param allocationCount
2233\param allocations
2234\param offsets If not null, it must point to an array of offsets of regions to flush, relative to the beginning of respective allocations. Null means all offsets are zero.
2235\param sizes If not null, it must point to an array of sizes of regions to flush in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
2236
2237This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
2238called, otherwise `VK_SUCCESS`.
2239*/
2240VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
2241 VmaAllocator VMA_NOT_NULL allocator,
2242 uint32_t allocationCount,
2243 const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
2244 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
2245 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
2246
2247/** \brief Invalidates memory of given set of allocations.
2248
2249Calls `vkInvalidateMappedMemoryRanges()` for memory associated with given ranges of given allocations.
2250For more information, see documentation of vmaInvalidateAllocation().
2251
2252\param allocator
2253\param allocationCount
2254\param allocations
\param offsets If not null, it must point to an array of offsets of regions to invalidate, relative to the beginning of respective allocations. Null means all offsets are zero.
\param sizes If not null, it must point to an array of sizes of regions to invalidate in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
2257
2258This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if it is
2259called, otherwise `VK_SUCCESS`.
2260*/
2261VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
2262 VmaAllocator VMA_NOT_NULL allocator,
2263 uint32_t allocationCount,
2264 const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
2265 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
2266 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
2267
2268/** \brief Maps the allocation temporarily if needed, copies data from specified host pointer to it, and flushes the memory from the host caches if needed.
2269
2270\param allocator
\param pSrcHostPointer Pointer to the host data that becomes the source of the copy.
2272\param dstAllocation Handle to the allocation that becomes destination of the copy.
2273\param dstAllocationLocalOffset Offset within `dstAllocation` where to write copied data, in bytes.
2274\param size Number of bytes to copy.
2275
This is a convenience function that allows you to copy data from a host pointer to an allocation easily.
The same behavior can be achieved by calling vmaMapMemory(), `memcpy()`, vmaUnmapMemory(), vmaFlushAllocation().
2278
2279This function can be called only for allocations created in a memory type that has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` flag.
2280It can be ensured e.g. by using #VMA_MEMORY_USAGE_AUTO and #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or
2281#VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
2282Otherwise, the function will fail and generate a Validation Layers error.
2283
`dstAllocationLocalOffset` is relative to the contents of the given `dstAllocation`.
If you mean the whole allocation, you should pass 0.
Do not pass the allocation's offset within the device memory block as this parameter!
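
For example, a sketch of uploading a small host-side struct (`myData` and `dstAlloc` are hypothetical;
`dstAlloc` is assumed to be created with #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT):

\code
struct MyData { float values[16]; } myData = {};
VkResult res = vmaCopyMemoryToAllocation(allocator, &myData, dstAlloc, 0, sizeof(myData));
// Check res...
\endcode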
2287*/
2288VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyMemoryToAllocation(
2289 VmaAllocator VMA_NOT_NULL allocator,
2290 const void* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(size) pSrcHostPointer,
2291 VmaAllocation VMA_NOT_NULL dstAllocation,
2292 VkDeviceSize dstAllocationLocalOffset,
2293 VkDeviceSize size);
2294
2295/** \brief Invalidates memory in the host caches if needed, maps the allocation temporarily if needed, and copies data from it to a specified host pointer.
2296
2297\param allocator
2298\param srcAllocation Handle to the allocation that becomes source of the copy.
2299\param srcAllocationLocalOffset Offset within `srcAllocation` where to read copied data, in bytes.
\param pDstHostPointer Pointer to the host memory that becomes the destination of the copy.
2301\param size Number of bytes to copy.
2302
This is a convenience function that allows you to copy data from an allocation to a host pointer easily.
The same behavior can be achieved by calling vmaInvalidateAllocation(), vmaMapMemory(), `memcpy()`, vmaUnmapMemory().
2305
2306This function should be called only for allocations created in a memory type that has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
2307and `VK_MEMORY_PROPERTY_HOST_CACHED_BIT` flag.
2308It can be ensured e.g. by using #VMA_MEMORY_USAGE_AUTO and #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
2309Otherwise, the function may fail and generate a Validation Layers error.
It may also work very slowly when reading from uncached memory.
2311
2312`srcAllocationLocalOffset` is relative to the contents of given `srcAllocation`.
2313If you mean whole allocation, you should pass 0.
2314Do not pass allocation's offset within device memory block as this parameter!
2315*/
2316VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyAllocationToMemory(
2317 VmaAllocator VMA_NOT_NULL allocator,
2318 VmaAllocation VMA_NOT_NULL srcAllocation,
2319 VkDeviceSize srcAllocationLocalOffset,
2320 void* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(size) pDstHostPointer,
2321 VkDeviceSize size);
2322
2323/** \brief Checks magic number in margins around all allocations in given memory types (in both default and custom pools) in search for corruptions.
2324
2325\param allocator
2326\param memoryTypeBits Bit mask, where each bit set means that a memory type with that index should be checked.
2327
2328Corruption detection is enabled only when `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
2329`VMA_DEBUG_MARGIN` is defined to nonzero and only for memory types that are
2330`HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
2331
2332Possible return values:
2333
2334- `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for any of specified memory types.
2335- `VK_SUCCESS` - corruption detection has been performed and succeeded.
2336- `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
2337 `VMA_ASSERT` is also fired in that case.
2338- Other value: Error returned by Vulkan, e.g. memory mapping failure.
2339*/
2340VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
2341 VmaAllocator VMA_NOT_NULL allocator,
2342 uint32_t memoryTypeBits);
2343
2344/** \brief Begins defragmentation process.
2345
2346\param allocator Allocator object.
2347\param pInfo Structure filled with parameters of defragmentation.
2348\param[out] pContext Context object that must be passed to vmaEndDefragmentation() to finish defragmentation.
2349\returns
2350- `VK_SUCCESS` if defragmentation can begin.
2351- `VK_ERROR_FEATURE_NOT_PRESENT` if defragmentation is not supported.
2352
2353For more information about defragmentation, see documentation chapter:
2354[Defragmentation](@ref defragmentation).
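
A sketch of the whole flow (processing of the individual moves, i.e. recreating buffers/images and copying
their contents, is omitted here; #VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT is just one possible algorithm choice):

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = pool; // Or null to defragment the default pools.
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;

VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
// Check res...

for(;;)
{
    VmaDefragmentationPassMoveInfo pass;
    res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break; // No more moves possible.
    // res == VK_INCOMPLETE: process pass.pMoves[i] here - create new buffers/images,
    // copy their contents, destroy the old ones...
    res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break;
}

vmaEndDefragmentation(allocator, defragCtx, NULL);
\endcode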
2355*/
2356VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
2357 VmaAllocator VMA_NOT_NULL allocator,
2358 const VmaDefragmentationInfo* VMA_NOT_NULL pInfo,
2359 VmaDefragmentationContext VMA_NULLABLE* VMA_NOT_NULL pContext);
2360
2361/** \brief Ends defragmentation process.
2362
2363\param allocator Allocator object.
2364\param context Context object that has been created by vmaBeginDefragmentation().
2365\param[out] pStats Optional stats for the defragmentation. Can be null.
2366
2367Use this function to finish defragmentation started by vmaBeginDefragmentation().
2368*/
2369VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
2370 VmaAllocator VMA_NOT_NULL allocator,
2371 VmaDefragmentationContext VMA_NOT_NULL context,
2372 VmaDefragmentationStats* VMA_NULLABLE pStats);
2373
2374/** \brief Starts single defragmentation pass.
2375
2376\param allocator Allocator object.
2377\param context Context object that has been created by vmaBeginDefragmentation().
2378\param[out] pPassInfo Computed information for current pass.
2379\returns
- `VK_SUCCESS` if no more moves are possible. Then you can omit the call to vmaEndDefragmentationPass() and simply end the whole defragmentation.
2381- `VK_INCOMPLETE` if there are pending moves returned in `pPassInfo`. You need to perform them, call vmaEndDefragmentationPass(),
2382 and then preferably try another pass with vmaBeginDefragmentationPass().
2383*/
2384VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
2385 VmaAllocator VMA_NOT_NULL allocator,
2386 VmaDefragmentationContext VMA_NOT_NULL context,
2387 VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
2388
2389/** \brief Ends single defragmentation pass.
2390
2391\param allocator Allocator object.
2392\param context Context object that has been created by vmaBeginDefragmentation().
2393\param pPassInfo Computed information for current pass filled by vmaBeginDefragmentationPass() and possibly modified by you.
2394
Returns `VK_SUCCESS` if no more moves are possible, or `VK_INCOMPLETE` if more defragmentation passes are possible.
2396
2397Ends incremental defragmentation pass and commits all defragmentation moves from `pPassInfo`.
2398After this call:
2399
- Allocations at `pPassInfo->pMoves[i].srcAllocation` that had `pPassInfo->pMoves[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY
  (which is the default) will be pointing to the new destination place.
- Allocations at `pPassInfo->pMoves[i].srcAllocation` that had `pPassInfo->pMoves[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY
  will be freed.
2404
If no more moves are possible, you can end the whole defragmentation.
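
For example, a minimal sketch of the complete pass loop (error handling and the actual data moves are omitted; see the Defragmentation chapter for details):

\code
VmaDefragmentationInfo defragInfo = {};
VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
if(res == VK_SUCCESS)
{
    for(;;)
    {
        VmaDefragmentationPassMoveInfo pass;
        res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
        if(res == VK_SUCCESS)
            break; // Nothing more to move.
        // res == VK_INCOMPLETE: perform the moves listed in pass.pMoves[0..pass.moveCount),
        // e.g. record and submit copy commands and update your own references, then:
        res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
        if(res == VK_SUCCESS)
            break; // Defragmentation complete.
        // res == VK_INCOMPLETE: another pass is recommended.
    }
    vmaEndDefragmentation(allocator, defragCtx, nullptr);
}
\endcode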
2406*/
2407VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
2408 VmaAllocator VMA_NOT_NULL allocator,
2409 VmaDefragmentationContext VMA_NOT_NULL context,
2410 VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
2411
2412/** \brief Binds buffer to allocation.
2413
2414Binds specified buffer to region of memory represented by specified allocation.
2415Gets `VkDeviceMemory` handle and offset from the allocation.
2416If you want to create a buffer, allocate memory for it and bind them together separately,
2417you should use this function for binding instead of standard `vkBindBufferMemory()`,
2418because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
2419allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
2420(which is illegal in Vulkan).
2421
2422It is recommended to use function vmaCreateBuffer() instead of this one.
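
If you nevertheless need the separate path, here is a minimal sketch (`device` and `bufCreateInfo` are assumed to exist; note that #VMA_MEMORY_USAGE_AUTO cannot be used here, because the library cannot infer usage flags from a bare `VkBuffer` handle, so explicit flags are specified instead):

\code
VkBuffer buf;
vkCreateBuffer(device, &bufCreateInfo, nullptr, &buf);

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, nullptr);
vmaBindBufferMemory(allocator, alloc, buf);
\endcode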
2423*/
2424VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
2425 VmaAllocator VMA_NOT_NULL allocator,
2426 VmaAllocation VMA_NOT_NULL allocation,
2427 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer);
2428
2429/** \brief Binds buffer to allocation with additional parameters.
2430
2431\param allocator
2432\param allocation
2433\param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
2434\param buffer
2435\param pNext A chain of structures to be attached to `VkBindBufferMemoryInfoKHR` structure used internally. Normally it should be null.
2436
2437This function is similar to vmaBindBufferMemory(), but it provides additional parameters.
2438
2439If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
2440or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
2441*/
2442VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
2443 VmaAllocator VMA_NOT_NULL allocator,
2444 VmaAllocation VMA_NOT_NULL allocation,
2445 VkDeviceSize allocationLocalOffset,
2446 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
2447 const void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkBindBufferMemoryInfoKHR) pNext);
2448
2449/** \brief Binds image to allocation.
2450
2451Binds specified image to region of memory represented by specified allocation.
2452Gets `VkDeviceMemory` handle and offset from the allocation.
2453If you want to create an image, allocate memory for it and bind them together separately,
2454you should use this function for binding instead of standard `vkBindImageMemory()`,
2455because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
2456allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
2457(which is illegal in Vulkan).
2458
2459It is recommended to use function vmaCreateImage() instead of this one.
2460*/
2461VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
2462 VmaAllocator VMA_NOT_NULL allocator,
2463 VmaAllocation VMA_NOT_NULL allocation,
2464 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image);
2465
2466/** \brief Binds image to allocation with additional parameters.
2467
2468\param allocator
2469\param allocation
2470\param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
2471\param image
2472\param pNext A chain of structures to be attached to `VkBindImageMemoryInfoKHR` structure used internally. Normally it should be null.
2473
2474This function is similar to vmaBindImageMemory(), but it provides additional parameters.
2475
2476If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
2477or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
2478*/
2479VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
2480 VmaAllocator VMA_NOT_NULL allocator,
2481 VmaAllocation VMA_NOT_NULL allocation,
2482 VkDeviceSize allocationLocalOffset,
2483 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
2484 const void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkBindImageMemoryInfoKHR) pNext);
2485
2486/** \brief Creates a new `VkBuffer`, allocates and binds memory for it.
2487
2488\param allocator
2489\param pBufferCreateInfo
2490\param pAllocationCreateInfo
2491\param[out] pBuffer Buffer that was created.
2492\param[out] pAllocation Allocation that was created.
2493\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
2494
2495This function automatically:
2496
2497-# Creates buffer.
2498-# Allocates appropriate memory for it.
2499-# Binds the buffer with the memory.
2500
2501If any of these operations fail, buffer and allocation are not created,
2502returned value is negative error code, `*pBuffer` and `*pAllocation` are null.
2503
2504If the function succeeded, you must destroy both buffer and allocation when you
2505no longer need them using either convenience function vmaDestroyBuffer() or
2506separately, using `vkDestroyBuffer()` and vmaFreeMemory().
2507
2508If #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag was used,
2509VK_KHR_dedicated_allocation extension is used internally to query driver whether
2510it requires or prefers the new buffer to have dedicated allocation. If yes,
2511and if dedicated allocation is possible
2512(#VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT is not used), it creates dedicated
2513allocation for this buffer, just like when using
2514#VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
2515
2516\note This function creates a new `VkBuffer`. Sub-allocation of parts of one large buffer,
2517although recommended as a good practice, is out of scope of this library and could be implemented
2518by the user as a higher-level logic on top of VMA.
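
For example, a minimal sketch:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// ...
vmaDestroyBuffer(allocator, buf, alloc);
\endcode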
2519*/
2520VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
2521 VmaAllocator VMA_NOT_NULL allocator,
2522 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2523 const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
2524 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
2525 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
2526 VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
2527
2528/** \brief Creates a buffer with additional minimum alignment.
2529
Similar to vmaCreateBuffer(), but provides the additional parameter `minAlignment`, which allows you to specify
a custom minimum alignment to be used when placing the buffer inside a larger memory block. This may be needed
e.g. for interop with OpenGL.
2533*/
2534VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
2535 VmaAllocator VMA_NOT_NULL allocator,
2536 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2537 const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
2538 VkDeviceSize minAlignment,
2539 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
2540 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
2541 VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
2542
2543/** \brief Creates a new `VkBuffer`, binds already created memory for it.
2544
2545\param allocator
2546\param allocation Allocation that provides memory to be used for binding new buffer to it.
2547\param pBufferCreateInfo
2548\param[out] pBuffer Buffer that was created.
2549
2550This function automatically:
2551
2552-# Creates buffer.
2553-# Binds the buffer with the supplied memory.
2554
2555If any of these operations fail, buffer is not created,
2556returned value is negative error code and `*pBuffer` is null.
2557
2558If the function succeeded, you must destroy the buffer when you
2559no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
2560allocation you can use convenience function vmaDestroyBuffer().
2561
2562\note There is a new version of this function augmented with parameter `allocationLocalOffset` - see vmaCreateAliasingBuffer2().
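
For example, a minimal sketch (`origAllocation` and `device` are placeholders; the allocation is assumed to be large enough to hold the new buffer):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 4096;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;

VkBuffer aliasingBuf;
VkResult res = vmaCreateAliasingBuffer(allocator, origAllocation, &bufCreateInfo, &aliasingBuf);
// ...
vkDestroyBuffer(device, aliasingBuf, nullptr);
\endcode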
2563*/
2564VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
2565 VmaAllocator VMA_NOT_NULL allocator,
2566 VmaAllocation VMA_NOT_NULL allocation,
2567 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2568 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
2569
2570/** \brief Creates a new `VkBuffer`, binds already created memory for it.
2571
2572\param allocator
2573\param allocation Allocation that provides memory to be used for binding new buffer to it.
2574\param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the allocation. Normally it should be 0.
2575\param pBufferCreateInfo
2576\param[out] pBuffer Buffer that was created.
2577
2578This function automatically:
2579
2580-# Creates buffer.
2581-# Binds the buffer with the supplied memory.
2582
2583If any of these operations fail, buffer is not created,
2584returned value is negative error code and `*pBuffer` is null.
2585
2586If the function succeeded, you must destroy the buffer when you
2587no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
2588allocation you can use convenience function vmaDestroyBuffer().
2589
2590\note This is a new version of the function augmented with parameter `allocationLocalOffset`.
2591*/
2592VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer2(
2593 VmaAllocator VMA_NOT_NULL allocator,
2594 VmaAllocation VMA_NOT_NULL allocation,
2595 VkDeviceSize allocationLocalOffset,
2596 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2597 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
2598
2599/** \brief Destroys Vulkan buffer and frees allocated memory.
2600
2601This is just a convenience function equivalent to:
2602
2603\code
2604vkDestroyBuffer(device, buffer, allocationCallbacks);
2605vmaFreeMemory(allocator, allocation);
2606\endcode
2607
2608It is safe to pass null as buffer and/or allocation.
2609*/
2610VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
2611 VmaAllocator VMA_NOT_NULL allocator,
2612 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE buffer,
2613 VmaAllocation VMA_NULLABLE allocation);
2614
2615/// Function similar to vmaCreateBuffer().
2616VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
2617 VmaAllocator VMA_NOT_NULL allocator,
2618 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
2619 const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
2620 VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage,
2621 VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
2622 VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
2623
2624/// Function similar to vmaCreateAliasingBuffer() but for images.
2625VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
2626 VmaAllocator VMA_NOT_NULL allocator,
2627 VmaAllocation VMA_NOT_NULL allocation,
2628 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
2629 VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
2630
2631/// Function similar to vmaCreateAliasingBuffer2() but for images.
2632VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage2(
2633 VmaAllocator VMA_NOT_NULL allocator,
2634 VmaAllocation VMA_NOT_NULL allocation,
2635 VkDeviceSize allocationLocalOffset,
2636 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
2637 VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
2638
2639/** \brief Destroys Vulkan image and frees allocated memory.
2640
2641This is just a convenience function equivalent to:
2642
2643\code
2644vkDestroyImage(device, image, allocationCallbacks);
2645vmaFreeMemory(allocator, allocation);
2646\endcode
2647
2648It is safe to pass null as image and/or allocation.
2649*/
2650VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
2651 VmaAllocator VMA_NOT_NULL allocator,
2652 VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
2653 VmaAllocation VMA_NULLABLE allocation);
2654
2655/** @} */
2656
2657/**
2658\addtogroup group_virtual
2659@{
2660*/
2661
2662/** \brief Creates new #VmaVirtualBlock object.
2663
2664\param pCreateInfo Parameters for creation.
2665\param[out] pVirtualBlock Returned virtual block object or `VMA_NULL` if creation failed.
2666*/
2667VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
2668 const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
2669 VmaVirtualBlock VMA_NULLABLE* VMA_NOT_NULL pVirtualBlock);
2670
2671/** \brief Destroys #VmaVirtualBlock object.
2672
2673Please note that you should consciously handle virtual allocations that could remain unfreed in the block.
2674You should either free them individually using vmaVirtualFree() or call vmaClearVirtualBlock()
2675if you are sure this is what you want. If you do neither, an assert is called.
2676
2677If you keep pointers to some additional metadata associated with your virtual allocations in their `pUserData`,
2678don't forget to free them.
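
For example, a minimal cleanup sketch:

\code
vmaClearVirtualBlock(block); // Frees any remaining virtual allocations at once.
vmaDestroyVirtualBlock(block);
\endcode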
2679*/
2680VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(
2681 VmaVirtualBlock VMA_NULLABLE virtualBlock);
2682
/** \brief Returns true if the #VmaVirtualBlock is empty - contains 0 virtual allocations and has all its space available for new allocations.
2684*/
2685VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(
2686 VmaVirtualBlock VMA_NOT_NULL virtualBlock);
2687
2688/** \brief Returns information about a specific virtual allocation within a virtual block, like its size and `pUserData` pointer.
2689*/
2690VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(
2691 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2692 VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo);
2693
2694/** \brief Allocates new virtual allocation inside given #VmaVirtualBlock.
2695
If the allocation fails due to lack of free space, `VK_ERROR_OUT_OF_DEVICE_MEMORY` is returned
(even though the function never allocates actual GPU memory).
`pAllocation` is then set to `VK_NULL_HANDLE` and `pOffset`, if not null, is set to `UINT64_MAX`.
2699
2700\param virtualBlock Virtual block
2701\param pCreateInfo Parameters for the allocation
2702\param[out] pAllocation Returned handle of the new allocation
2703\param[out] pOffset Returned offset of the new allocation. Optional, can be null.
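
For example, a minimal sketch:

\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MB

VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;

VmaVirtualAllocation alloc;
VkDeviceSize offset;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
if(res == VK_SUCCESS)
{
    // Use the range [offset, offset + 4096), later:
    vmaVirtualFree(block, alloc);
}
\endcode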
2704*/
2705VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(
2706 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2707 const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
2708 VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
2709 VkDeviceSize* VMA_NULLABLE pOffset);
2710
2711/** \brief Frees virtual allocation inside given #VmaVirtualBlock.
2712
2713It is correct to call this function with `allocation == VK_NULL_HANDLE` - it does nothing.
2714*/
2715VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(
2716 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2717 VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation);
2718
2719/** \brief Frees all virtual allocations inside given #VmaVirtualBlock.
2720
2721You must either call this function or free each virtual allocation individually with vmaVirtualFree()
2722before destroying a virtual block. Otherwise, an assert is called.
2723
If you keep a pointer to some additional metadata associated with your virtual allocation in its `pUserData`,
2725don't forget to free it as well.
2726*/
2727VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(
2728 VmaVirtualBlock VMA_NOT_NULL virtualBlock);
2729
2730/** \brief Changes custom pointer associated with given virtual allocation.
2731*/
2732VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(
2733 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2734 VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation,
2735 void* VMA_NULLABLE pUserData);
2736
2737/** \brief Calculates and returns statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
2738
2739This function is fast to call. For more detailed statistics, see vmaCalculateVirtualBlockStatistics().
2740*/
2741VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(
2742 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2743 VmaStatistics* VMA_NOT_NULL pStats);
2744
2745/** \brief Calculates and returns detailed statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
2746
2747This function is slow to call. Use for debugging purposes.
2748For less detailed statistics, see vmaGetVirtualBlockStatistics().
2749*/
2750VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(
2751 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2752 VmaDetailedStatistics* VMA_NOT_NULL pStats);
2753
2754/** @} */
2755
2756#if VMA_STATS_STRING_ENABLED
2757/**
2758\addtogroup group_stats
2759@{
2760*/
2761
2762/** \brief Builds and returns a null-terminated string in JSON format with information about given #VmaVirtualBlock.
2763\param virtualBlock Virtual block.
2764\param[out] ppStatsString Returned string.
2765\param detailedMap Pass `VK_FALSE` to only obtain statistics as returned by vmaCalculateVirtualBlockStatistics(). Pass `VK_TRUE` to also obtain full list of allocations and free spaces.
2766
2767Returned string must be freed using vmaFreeVirtualBlockStatsString().
2768*/
2769VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(
2770 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2771 char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
2772 VkBool32 detailedMap);
2773
2774/// Frees a string returned by vmaBuildVirtualBlockStatsString().
2775VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(
2776 VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2777 char* VMA_NULLABLE pStatsString);
2778
2779/** \brief Builds and returns statistics as a null-terminated string in JSON format.
2780\param allocator
2781\param[out] ppStatsString Must be freed using vmaFreeStatsString() function.
\param detailedMap Pass `VK_FALSE` to only obtain summary statistics. Pass `VK_TRUE` to also obtain full list of allocations and free spaces.
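
For example, a minimal sketch:

\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE);
// Write statsString e.g. to a file for offline inspection, then:
vmaFreeStatsString(allocator, statsString);
\endcode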
2783*/
2784VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
2785 VmaAllocator VMA_NOT_NULL allocator,
2786 char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
2787 VkBool32 detailedMap);
2788
2789VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
2790 VmaAllocator VMA_NOT_NULL allocator,
2791 char* VMA_NULLABLE pStatsString);
2792
2793/** @} */
2794
2795#endif // VMA_STATS_STRING_ENABLED
2796
2797#endif // _VMA_FUNCTION_HEADERS
2798
2799#ifdef __cplusplus
2800}
2801#endif
2802
2803#endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
2804
2805////////////////////////////////////////////////////////////////////////////////
2806////////////////////////////////////////////////////////////////////////////////
2807//
2808// IMPLEMENTATION
2809//
2810////////////////////////////////////////////////////////////////////////////////
2811////////////////////////////////////////////////////////////////////////////////
2812
2813// For Visual Studio IntelliSense.
2814#if defined(__cplusplus) && defined(__INTELLISENSE__)
2815#define VMA_IMPLEMENTATION
2816#endif
2817
2818#ifdef VMA_IMPLEMENTATION
2819#undef VMA_IMPLEMENTATION
2820
2821#if defined(__GNUC__) && !defined(__clang__)
2822#pragma GCC diagnostic push
2823#pragma GCC diagnostic ignored "-Wunused-variable"
2824#pragma GCC diagnostic ignored "-Wunused-parameter"
2825#pragma GCC diagnostic ignored "-Wmissing-field-initializers"
2826#pragma GCC diagnostic ignored "-Wparentheses"
2827#pragma GCC diagnostic ignored "-Wimplicit-fallthrough"
2828#elif defined(__clang__)
2829#pragma clang diagnostic push
2830#pragma clang diagnostic ignored "-Wunused-variable"
2831#pragma clang diagnostic ignored "-Wunused-parameter"
2832#pragma clang diagnostic ignored "-Wunused-private-field"
2833#pragma clang diagnostic ignored "-Wmissing-field-initializers"
2834#pragma clang diagnostic ignored "-Wparentheses"
2835#pragma clang diagnostic ignored "-Wimplicit-fallthrough"
2836#pragma clang diagnostic ignored "-Wnullability-completeness"
2837#endif
2838
2839#include <cstdint>
2840#include <cstdlib>
2841#include <cstring>
2842#include <cinttypes>
2843#include <utility>
2844#include <type_traits>
2845
2846#if !defined(VMA_CPP20)
2847 #if __cplusplus >= 202002L || _MSVC_LANG >= 202002L // C++20
2848 #define VMA_CPP20 1
2849 #else
2850 #define VMA_CPP20 0
2851 #endif
2852#endif
2853
2854#ifdef _MSC_VER
2855 #include <intrin.h> // For functions like __popcnt, _BitScanForward etc.
2856#endif
2857#if VMA_CPP20
2858 #include <bit>
2859#endif
2860
2861#if VMA_STATS_STRING_ENABLED
2862 #include <cstdio> // For snprintf
2863#endif
2864
2865/*******************************************************************************
2866CONFIGURATION SECTION
2867
2868Define some of these macros before each #include of this header or change them
here if you need behavior other than the default, depending on your environment.
2870*/
2871#ifndef _VMA_CONFIGURATION
2872
2873/*
2874Define this macro to 1 to make the library fetch pointers to Vulkan functions
2875internally, like:
2876
2877 vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
2878*/
2879#if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
2880 #define VMA_STATIC_VULKAN_FUNCTIONS 1
2881#endif
2882
2883/*
2884Define this macro to 1 to make the library fetch pointers to Vulkan functions
2885internally, like:
2886
2887 vulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkGetDeviceProcAddr(device, "vkAllocateMemory");
2888
2889To use this feature in new versions of VMA you now have to pass
2890VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as
2891VmaAllocatorCreateInfo::pVulkanFunctions. Other members can be null.
2892*/
2893#if !defined(VMA_DYNAMIC_VULKAN_FUNCTIONS)
2894 #define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
2895#endif
2896
2897#ifndef VMA_USE_STL_SHARED_MUTEX
2898 #if __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
2899 #define VMA_USE_STL_SHARED_MUTEX 1
    // Visual Studio defines __cplusplus properly only when passed the additional parameter: /Zc:__cplusplus
    // Otherwise it is always 199711L, even though shared_mutex has worked since Visual Studio 2015 Update 2.
2902 #elif defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023918 && __cplusplus == 199711L && _MSVC_LANG >= 201703L
2903 #define VMA_USE_STL_SHARED_MUTEX 1
2904 #else
2905 #define VMA_USE_STL_SHARED_MUTEX 0
2906 #endif
2907#endif
2908
2909/*
2910Define this macro to include custom header files without having to edit this file directly, e.g.:
2911
2912 // Inside of "my_vma_configuration_user_includes.h":
2913
2914 #include "my_custom_assert.h" // for MY_CUSTOM_ASSERT
2915 #include "my_custom_min.h" // for my_custom_min
2916 #include <algorithm>
2917 #include <mutex>
2918
2919 // Inside a different file, which includes "vk_mem_alloc.h":
2920
2921 #define VMA_CONFIGURATION_USER_INCLUDES_H "my_vma_configuration_user_includes.h"
2922 #define VMA_ASSERT(expr) MY_CUSTOM_ASSERT(expr)
2923 #define VMA_MIN(v1, v2) (my_custom_min(v1, v2))
2924 #include "vk_mem_alloc.h"
2925 ...
2926
2927The following headers are used in this CONFIGURATION section only, so feel free to
2928remove them if not needed.
2929*/
2930#if !defined(VMA_CONFIGURATION_USER_INCLUDES_H)
2931 #include <cassert> // for assert
2932 #include <algorithm> // for min, max, swap
2933 #include <mutex>
2934#else
2935 #include VMA_CONFIGURATION_USER_INCLUDES_H
2936#endif
2937
2938#ifndef VMA_NULL
2939 // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
2940 #define VMA_NULL nullptr
2941#endif
2942
2943#ifndef VMA_FALLTHROUGH
2944 #if __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
2945 #define VMA_FALLTHROUGH [[fallthrough]]
2946 #else
2947 #define VMA_FALLTHROUGH
2948 #endif
2949#endif
2950
2951// Normal assert to check for programmer's errors, especially in Debug configuration.
2952#ifndef VMA_ASSERT
2953 #ifdef NDEBUG
2954 #define VMA_ASSERT(expr)
2955 #else
2956 #define VMA_ASSERT(expr) assert(expr)
2957 #endif
2958#endif
2959
2960// Assert that will be called very often, like inside data structures e.g. operator[].
2961// Making it non-empty can make program slow.
2962#ifndef VMA_HEAVY_ASSERT
2963 #ifdef NDEBUG
2964 #define VMA_HEAVY_ASSERT(expr)
2965 #else
2966 #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
2967 #endif
2968#endif
2969
2970// Assert used for reporting memory leaks - unfreed allocations.
2971#ifndef VMA_ASSERT_LEAK
2972 #define VMA_ASSERT_LEAK(expr) VMA_ASSERT(expr)
2973#endif
2974
// If your compiler is not compatible with C++17 and the definition of
// aligned_alloc() function is missing, uncommenting the following line may help:
2977
2978//#include <malloc.h>
2979
2980#if defined(__ANDROID_API__) && (__ANDROID_API__ < 16)
2981#include <cstdlib>
2982static void* vma_aligned_alloc(size_t alignment, size_t size)
2983{
2984 // alignment must be >= sizeof(void*)
2985 if(alignment < sizeof(void*))
2986 {
2987 alignment = sizeof(void*);
2988 }
2989
2990 return memalign(alignment, size);
2991}
2992#elif defined(__APPLE__) || defined(__ANDROID__) || (defined(__linux__) && defined(__GLIBCXX__) && !defined(_GLIBCXX_HAVE_ALIGNED_ALLOC))
2993#include <cstdlib>
2994
2995#if defined(__APPLE__)
2996#include <AvailabilityMacros.h>
2997#endif
2998
2999static void* vma_aligned_alloc(size_t alignment, size_t size)
3000{
    // Unfortunately, aligned_alloc causes VMA to crash due to it returning null pointers (at least under macOS 11.4).
    // Therefore, this specific case is disabled for now until a proper solution is found.
3003 //#if defined(__APPLE__) && (defined(MAC_OS_X_VERSION_10_16) || defined(__IPHONE_14_0))
3004 //#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_16 || __IPHONE_OS_VERSION_MAX_ALLOWED >= __IPHONE_14_0
3005 // // For C++14, usr/include/malloc/_malloc.h declares aligned_alloc()) only
3006 // // with the MacOSX11.0 SDK in Xcode 12 (which is what adds
3007 // // MAC_OS_X_VERSION_10_16), even though the function is marked
3008 // // available for 10.15. That is why the preprocessor checks for 10.16 but
3009 // // the __builtin_available checks for 10.15.
3010 // // People who use C++17 could call aligned_alloc with the 10.15 SDK already.
3011 // if (__builtin_available(macOS 10.15, iOS 13, *))
3012 // return aligned_alloc(alignment, size);
3013 //#endif
3014 //#endif
3015
3016 // alignment must be >= sizeof(void*)
3017 if(alignment < sizeof(void*))
3018 {
3019 alignment = sizeof(void*);
3020 }
3021
3022 void *pointer;
3023 if(posix_memalign(&pointer, alignment, size) == 0)
3024 return pointer;
3025 return VMA_NULL;
3026}
3027#elif defined(_WIN32)
3028static void* vma_aligned_alloc(size_t alignment, size_t size)
3029{
3030 return _aligned_malloc(size, alignment);
3031}
3032#elif __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
3033static void* vma_aligned_alloc(size_t alignment, size_t size)
3034{
    return aligned_alloc(alignment, size);
3036}
3037#else
3038static void* vma_aligned_alloc(size_t alignment, size_t size)
3039{
3040 VMA_ASSERT(0 && "Could not implement aligned_alloc automatically. Please enable C++17 or later in your compiler or provide custom implementation of macro VMA_SYSTEM_ALIGNED_MALLOC (and VMA_SYSTEM_ALIGNED_FREE if needed) using the API of your system.");
3041 return VMA_NULL;
3042}
3043#endif
3044
3045#if defined(_WIN32)
3046static void vma_aligned_free(void* ptr)
3047{
3048 _aligned_free(ptr);
3049}
3050#else
3051static void vma_aligned_free(void* VMA_NULLABLE ptr)
3052{
    free(ptr);
3054}
3055#endif
3056
3057#ifndef VMA_ALIGN_OF
3058 #define VMA_ALIGN_OF(type) (alignof(type))
3059#endif
3060
3061#ifndef VMA_SYSTEM_ALIGNED_MALLOC
3062 #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) vma_aligned_alloc((alignment), (size))
3063#endif
3064
3065#ifndef VMA_SYSTEM_ALIGNED_FREE
3066 // VMA_SYSTEM_FREE is the old name, but might have been defined by the user
3067 #if defined(VMA_SYSTEM_FREE)
3068 #define VMA_SYSTEM_ALIGNED_FREE(ptr) VMA_SYSTEM_FREE(ptr)
3069 #else
3070 #define VMA_SYSTEM_ALIGNED_FREE(ptr) vma_aligned_free(ptr)
3071 #endif
3072#endif
3073
3074#ifndef VMA_COUNT_BITS_SET
3075 // Returns number of bits set to 1 in (v)
3076 #define VMA_COUNT_BITS_SET(v) VmaCountBitsSet(v)
3077#endif
3078
3079#ifndef VMA_BITSCAN_LSB
3080 // Scans integer for index of first nonzero value from the Least Significant Bit (LSB). If mask is 0 then returns UINT8_MAX
3081 #define VMA_BITSCAN_LSB(mask) VmaBitScanLSB(mask)
3082#endif
3083
3084#ifndef VMA_BITSCAN_MSB
3085 // Scans integer for index of first nonzero value from the Most Significant Bit (MSB). If mask is 0 then returns UINT8_MAX
3086 #define VMA_BITSCAN_MSB(mask) VmaBitScanMSB(mask)
3087#endif
3088
3089#ifndef VMA_MIN
3090 #define VMA_MIN(v1, v2) ((std::min)((v1), (v2)))
3091#endif
3092
3093#ifndef VMA_MAX
3094 #define VMA_MAX(v1, v2) ((std::max)((v1), (v2)))
3095#endif
3096
3097#ifndef VMA_SORT
3098 #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
3099#endif
3100
3101#ifndef VMA_DEBUG_LOG_FORMAT
3102 #define VMA_DEBUG_LOG_FORMAT(format, ...)
3103 /*
3104 #define VMA_DEBUG_LOG_FORMAT(format, ...) do { \
3105 printf((format), __VA_ARGS__); \
3106 printf("\n"); \
3107 } while(false)
3108 */
3109#endif
3110
3111#ifndef VMA_DEBUG_LOG
3112 #define VMA_DEBUG_LOG(str) VMA_DEBUG_LOG_FORMAT("%s", (str))
3113#endif
3114
3115#ifndef VMA_LEAK_LOG_FORMAT
3116 #define VMA_LEAK_LOG_FORMAT(format, ...) VMA_DEBUG_LOG_FORMAT(format, __VA_ARGS__)
3117#endif
3118
3119#ifndef VMA_CLASS_NO_COPY
3120 #define VMA_CLASS_NO_COPY(className) \
3121 private: \
3122 className(const className&) = delete; \
3123 className& operator=(const className&) = delete;
3124#endif
3125#ifndef VMA_CLASS_NO_COPY_NO_MOVE
3126 #define VMA_CLASS_NO_COPY_NO_MOVE(className) \
3127 private: \
3128 className(const className&) = delete; \
3129 className(className&&) = delete; \
3130 className& operator=(const className&) = delete; \
3131 className& operator=(className&&) = delete;
3132#endif
3133
3134// Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
3135#if VMA_STATS_STRING_ENABLED
3136 static inline void VmaUint32ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint32_t num)
3137 {
        snprintf(outStr, strLen, "%" PRIu32, num);
3139 }
3140 static inline void VmaUint64ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint64_t num)
3141 {
        snprintf(outStr, strLen, "%" PRIu64, num);
3143 }
3144 static inline void VmaPtrToStr(char* VMA_NOT_NULL outStr, size_t strLen, const void* ptr)
3145 {
        snprintf(outStr, strLen, "%p", ptr);
3147 }
3148#endif
3149
3150#ifndef VMA_MUTEX
3151 class VmaMutex
3152 {
3153 VMA_CLASS_NO_COPY_NO_MOVE(VmaMutex)
3154 public:
3155 VmaMutex() { }
3156 void Lock() { m_Mutex.lock(); }
3157 void Unlock() { m_Mutex.unlock(); }
3158 bool TryLock() { return m_Mutex.try_lock(); }
3159 private:
3160 std::mutex m_Mutex;
3161 };
3162 #define VMA_MUTEX VmaMutex
3163#endif
3164
3165// Read-write mutex, where "read" is shared access, "write" is exclusive access.
3166#ifndef VMA_RW_MUTEX
3167 #if VMA_USE_STL_SHARED_MUTEX
3168 // Use std::shared_mutex from C++17.
3169 #include <shared_mutex>
3170 class VmaRWMutex
3171 {
3172 public:
3173 void LockRead() { m_Mutex.lock_shared(); }
3174 void UnlockRead() { m_Mutex.unlock_shared(); }
3175 bool TryLockRead() { return m_Mutex.try_lock_shared(); }
3176 void LockWrite() { m_Mutex.lock(); }
3177 void UnlockWrite() { m_Mutex.unlock(); }
3178 bool TryLockWrite() { return m_Mutex.try_lock(); }
3179 private:
3180 std::shared_mutex m_Mutex;
3181 };
3182 #define VMA_RW_MUTEX VmaRWMutex
3183 #elif defined(_WIN32) && defined(WINVER) && defined(SRWLOCK_INIT) && WINVER >= 0x0600
3184 // Use SRWLOCK from WinAPI.
3185 // Minimum supported client = Windows Vista, server = Windows Server 2008.
3186 class VmaRWMutex
3187 {
3188 public:
3189 VmaRWMutex() { InitializeSRWLock(&m_Lock); }
3190 void LockRead() { AcquireSRWLockShared(&m_Lock); }
3191 void UnlockRead() { ReleaseSRWLockShared(&m_Lock); }
3192 bool TryLockRead() { return TryAcquireSRWLockShared(&m_Lock) != FALSE; }
3193 void LockWrite() { AcquireSRWLockExclusive(&m_Lock); }
3194 void UnlockWrite() { ReleaseSRWLockExclusive(&m_Lock); }
3195 bool TryLockWrite() { return TryAcquireSRWLockExclusive(&m_Lock) != FALSE; }
3196 private:
3197 SRWLOCK m_Lock;
3198 };
3199 #define VMA_RW_MUTEX VmaRWMutex
3200 #else
3201 // Less efficient fallback: Use normal mutex.
3202 class VmaRWMutex
3203 {
3204 public:
3205 void LockRead() { m_Mutex.Lock(); }
3206 void UnlockRead() { m_Mutex.Unlock(); }
3207 bool TryLockRead() { return m_Mutex.TryLock(); }
3208 void LockWrite() { m_Mutex.Lock(); }
3209 void UnlockWrite() { m_Mutex.Unlock(); }
3210 bool TryLockWrite() { return m_Mutex.TryLock(); }
3211 private:
3212 VMA_MUTEX m_Mutex;
3213 };
3214 #define VMA_RW_MUTEX VmaRWMutex
3215 #endif // #if VMA_USE_STL_SHARED_MUTEX
3216#endif // #ifndef VMA_RW_MUTEX
3217
3218/*
3219If providing your own implementation, you need to implement a subset of std::atomic.
3220*/
3221#ifndef VMA_ATOMIC_UINT32
3222 #include <atomic>
3223 #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
3224#endif
3225
3226#ifndef VMA_ATOMIC_UINT64
3227 #include <atomic>
3228 #define VMA_ATOMIC_UINT64 std::atomic<uint64_t>
3229#endif
3230
3231#ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
3232 /**
3233 Every allocation will have its own memory block.
3234 Define to 1 for debugging purposes only.
3235 */
3236 #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
3237#endif
3238
3239#ifndef VMA_MIN_ALIGNMENT
3240 /**
3241 Minimum alignment of all allocations, in bytes.
3242 Set to more than 1 for debugging purposes. Must be power of two.
3243 */
3244 #ifdef VMA_DEBUG_ALIGNMENT // Old name
3245 #define VMA_MIN_ALIGNMENT VMA_DEBUG_ALIGNMENT
3246 #else
3247 #define VMA_MIN_ALIGNMENT (1)
3248 #endif
3249#endif
3250
3251#ifndef VMA_DEBUG_MARGIN
3252 /**
3253 Minimum margin after every allocation, in bytes.
3254 Set nonzero for debugging purposes only.
3255 */
3256 #define VMA_DEBUG_MARGIN (0)
3257#endif
3258
3259#ifndef VMA_DEBUG_INITIALIZE_ALLOCATIONS
3260 /**
3261 Define this macro to 1 to automatically fill new allocations and destroyed
3262 allocations with some bit pattern.
3263 */
3264 #define VMA_DEBUG_INITIALIZE_ALLOCATIONS (0)
3265#endif
3266
3267#ifndef VMA_DEBUG_DETECT_CORRUPTION
3268 /**
3269 Define this macro to 1 together with non-zero value of VMA_DEBUG_MARGIN to
3270 enable writing magic value to the margin after every allocation and
3271 validating it, so that memory corruptions (out-of-bounds writes) are detected.
3272 */
3273 #define VMA_DEBUG_DETECT_CORRUPTION (0)
3274#endif
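
/*
For example, a sketch of enabling corruption detection in your own source file,
before including this header (the macro values shown are illustrative):

    #define VMA_DEBUG_MARGIN 16
    #define VMA_DEBUG_DETECT_CORRUPTION 1
    #define VMA_IMPLEMENTATION
    #include "vk_mem_alloc.h"

With this configuration, vmaCheckCorruption() can validate the margins.
*/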
3275
3276#ifndef VMA_DEBUG_GLOBAL_MUTEX
3277 /**
3278 Set this to 1 for debugging purposes only, to enable single mutex protecting all
3279 entry calls to the library. Can be useful for debugging multithreading issues.
3280 */
3281 #define VMA_DEBUG_GLOBAL_MUTEX (0)
3282#endif
3283
3284#ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
3285 /**
3286 Minimum value for VkPhysicalDeviceLimits::bufferImageGranularity.
3287 Set to more than 1 for debugging purposes only. Must be power of two.
3288 */
3289 #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
3290#endif
3291
3292#ifndef VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
3293 /*
3294 Set this to 1 to make VMA never exceed VkPhysicalDeviceLimits::maxMemoryAllocationCount
3295 and return error instead of leaving up to Vulkan implementation what to do in such cases.
3296 */
3297 #define VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT (0)
3298#endif
3299
3300#ifndef VMA_SMALL_HEAP_MAX_SIZE
3301 /// Maximum size of a memory heap in Vulkan to consider it "small".
3302 #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
3303#endif
3304
3305#ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
3306 /// Default size of a block allocated as single VkDeviceMemory from a "large" heap.
3307 #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
3308#endif
3309
3310/*
Mapping hysteresis is logic that activates when vmaMapMemory/vmaUnmapMemory is called
or a persistently mapped allocation is created and destroyed several times in a row.
It keeps an additional +1 mapping of a device memory block to avoid calling actual
3314vkMapMemory/vkUnmapMemory too many times, which may improve performance and help
3315tools like RenderDoc.
3316*/
3317#ifndef VMA_MAPPING_HYSTERESIS_ENABLED
3318 #define VMA_MAPPING_HYSTERESIS_ENABLED 1
3319#endif
3320
3321#define VMA_VALIDATE(cond) do { if(!(cond)) { \
3322 VMA_ASSERT(0 && "Validation failed: " #cond); \
3323 return false; \
3324 } } while(false)
3325
3326/*******************************************************************************
3327END OF CONFIGURATION
3328*/
3329#endif // _VMA_CONFIGURATION
3330
3331
3332static const uint8_t VMA_ALLOCATION_FILL_PATTERN_CREATED = 0xDC;
3333static const uint8_t VMA_ALLOCATION_FILL_PATTERN_DESTROYED = 0xEF;
3334// Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
3335static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
3336
// Copy of some Vulkan definitions so we don't need to check their existence just to handle a few constants.
3338static const uint32_t VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY = 0x00000040;
3339static const uint32_t VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY = 0x00000080;
3340static const uint32_t VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY = 0x00020000;
3341static const uint32_t VK_IMAGE_CREATE_DISJOINT_BIT_COPY = 0x00000200;
3342static const int32_t VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY = 1000158000;
3343static const uint32_t VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET = 0x10000000u;
3344static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
3345static const uint32_t VMA_VENDOR_ID_AMD = 4098;
3346
3347// This one is tricky. Vulkan specification defines this code as available since
3348// Vulkan 1.0, but doesn't actually define it in Vulkan SDK earlier than 1.2.131.
3349// See pull request #207.
3350#define VK_ERROR_UNKNOWN_COPY ((VkResult)-13)
3351
3352
3353#if VMA_STATS_STRING_ENABLED
3354// Correspond to values of enum VmaSuballocationType.
3355static const char* VMA_SUBALLOCATION_TYPE_NAMES[] =
3356{
3357 "FREE",
3358 "UNKNOWN",
3359 "BUFFER",
3360 "IMAGE_UNKNOWN",
3361 "IMAGE_LINEAR",
3362 "IMAGE_OPTIMAL",
3363};
3364#endif
3365
3366static VkAllocationCallbacks VmaEmptyAllocationCallbacks =
3367 { VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
3368
3369
3370#ifndef _VMA_ENUM_DECLARATIONS
3371
3372enum VmaSuballocationType
3373{
3374 VMA_SUBALLOCATION_TYPE_FREE = 0,
3375 VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
3376 VMA_SUBALLOCATION_TYPE_BUFFER = 2,
3377 VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
3378 VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
3379 VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
3380 VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
3381};
3382
3383enum VMA_CACHE_OPERATION
3384{
3385 VMA_CACHE_FLUSH,
3386 VMA_CACHE_INVALIDATE
3387};
3388
3389enum class VmaAllocationRequestType
3390{
3391 Normal,
3392 TLSF,
3393 // Used by "Linear" algorithm.
3394 UpperAddress,
3395 EndOf1st,
3396 EndOf2nd,
3397};
3398
3399#endif // _VMA_ENUM_DECLARATIONS
3400
3401#ifndef _VMA_FORWARD_DECLARATIONS
3402// Opaque handle used by allocation algorithms to identify single allocation in any conforming way.
3403VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaAllocHandle);
3404
3405struct VmaMutexLock;
3406struct VmaMutexLockRead;
3407struct VmaMutexLockWrite;
3408
3409template<typename T>
3410struct AtomicTransactionalIncrement;
3411
3412template<typename T>
3413struct VmaStlAllocator;
3414
3415template<typename T, typename AllocatorT>
3416class VmaVector;
3417
3418template<typename T, typename AllocatorT, size_t N>
3419class VmaSmallVector;
3420
3421template<typename T>
3422class VmaPoolAllocator;
3423
3424template<typename T>
3425struct VmaListItem;
3426
3427template<typename T>
3428class VmaRawList;
3429
3430template<typename T, typename AllocatorT>
3431class VmaList;
3432
3433template<typename ItemTypeTraits>
3434class VmaIntrusiveLinkedList;
3435
3436#if VMA_STATS_STRING_ENABLED
3437class VmaStringBuilder;
3438class VmaJsonWriter;
3439#endif
3440
3441class VmaDeviceMemoryBlock;
3442
3443struct VmaDedicatedAllocationListItemTraits;
3444class VmaDedicatedAllocationList;
3445
3446struct VmaSuballocation;
3447struct VmaSuballocationOffsetLess;
3448struct VmaSuballocationOffsetGreater;
3449struct VmaSuballocationItemSizeLess;
3450
3451typedef VmaList<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> VmaSuballocationList;
3452
3453struct VmaAllocationRequest;
3454
3455class VmaBlockMetadata;
3456class VmaBlockMetadata_Linear;
3457class VmaBlockMetadata_TLSF;
3458
3459class VmaBlockVector;
3460
3461struct VmaPoolListItemTraits;
3462
3463struct VmaCurrentBudgetData;
3464
3465class VmaAllocationObjectAllocator;
3466
3467#endif // _VMA_FORWARD_DECLARATIONS
3468
3469
3470#ifndef _VMA_FUNCTIONS
3471
3472/*
3473Returns number of bits set to 1 in (v).
3474
3475On specific platforms and compilers you can use intrinsics like:
3476
3477Visual Studio:
3478 return __popcnt(v);
3479GCC, Clang:
3480 return static_cast<uint32_t>(__builtin_popcount(v));
3481
3482Define macro VMA_COUNT_BITS_SET to provide your optimized implementation.
But you need to check at runtime whether the user's CPU supports these, as some old processors don't.
3484*/
3485static inline uint32_t VmaCountBitsSet(uint32_t v)
3486{
3487#if VMA_CPP20
3488 return std::popcount(v);
3489#else
3490 uint32_t c = v - ((v >> 1) & 0x55555555);
3491 c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
3492 c = ((c >> 4) + c) & 0x0F0F0F0F;
3493 c = ((c >> 8) + c) & 0x00FF00FF;
3494 c = ((c >> 16) + c) & 0x0000FFFF;
3495 return c;
3496#endif
3497}
3498
3499static inline uint8_t VmaBitScanLSB(uint64_t mask)
3500{
3501#if defined(_MSC_VER) && defined(_WIN64)
3502 unsigned long pos;
3503 if (_BitScanForward64(&pos, mask))
3504 return static_cast<uint8_t>(pos);
3505 return UINT8_MAX;
3506#elif VMA_CPP20
3507 if(mask)
3508 return static_cast<uint8_t>(std::countr_zero(mask));
3509 return UINT8_MAX;
3510#elif defined __GNUC__ || defined __clang__
3511 return static_cast<uint8_t>(__builtin_ffsll(mask)) - 1U;
3512#else
3513 uint8_t pos = 0;
3514 uint64_t bit = 1;
3515 do
3516 {
3517 if (mask & bit)
3518 return pos;
3519 bit <<= 1;
3520 } while (pos++ < 63);
3521 return UINT8_MAX;
3522#endif
3523}
3524
3525static inline uint8_t VmaBitScanLSB(uint32_t mask)
3526{
3527#ifdef _MSC_VER
3528 unsigned long pos;
3529 if (_BitScanForward(&pos, mask))
3530 return static_cast<uint8_t>(pos);
3531 return UINT8_MAX;
3532#elif VMA_CPP20
3533 if(mask)
3534 return static_cast<uint8_t>(std::countr_zero(mask));
3535 return UINT8_MAX;
3536#elif defined __GNUC__ || defined __clang__
3537 return static_cast<uint8_t>(__builtin_ffs(mask)) - 1U;
3538#else
3539 uint8_t pos = 0;
3540 uint32_t bit = 1;
3541 do
3542 {
3543 if (mask & bit)
3544 return pos;
3545 bit <<= 1;
3546 } while (pos++ < 31);
3547 return UINT8_MAX;
3548#endif
3549}
3550
3551static inline uint8_t VmaBitScanMSB(uint64_t mask)
3552{
3553#if defined(_MSC_VER) && defined(_WIN64)
3554 unsigned long pos;
3555 if (_BitScanReverse64(&pos, mask))
3556 return static_cast<uint8_t>(pos);
3557#elif VMA_CPP20
3558 if(mask)
3559 return 63 - static_cast<uint8_t>(std::countl_zero(mask));
3560#elif defined __GNUC__ || defined __clang__
3561 if (mask)
3562 return 63 - static_cast<uint8_t>(__builtin_clzll(mask));
3563#else
3564 uint8_t pos = 63;
3565 uint64_t bit = 1ULL << 63;
3566 do
3567 {
3568 if (mask & bit)
3569 return pos;
3570 bit >>= 1;
3571 } while (pos-- > 0);
3572#endif
3573 return UINT8_MAX;
3574}
3575
3576static inline uint8_t VmaBitScanMSB(uint32_t mask)
3577{
3578#ifdef _MSC_VER
3579 unsigned long pos;
3580 if (_BitScanReverse(&pos, mask))
3581 return static_cast<uint8_t>(pos);
3582#elif VMA_CPP20
3583 if(mask)
3584 return 31 - static_cast<uint8_t>(std::countl_zero(mask));
3585#elif defined __GNUC__ || defined __clang__
3586 if (mask)
3587 return 31 - static_cast<uint8_t>(__builtin_clz(mask));
3588#else
3589 uint8_t pos = 31;
3590 uint32_t bit = 1UL << 31;
3591 do
3592 {
3593 if (mask & bit)
3594 return pos;
3595 bit >>= 1;
3596 } while (pos-- > 0);
3597#endif
3598 return UINT8_MAX;
3599}
3600
3601/*
3602Returns true if given number is a power of two.
T must be an unsigned integer, or a signed integer that is always nonnegative.
3604For 0 returns true.
3605*/
3606template <typename T>
3607inline bool VmaIsPow2(T x)
3608{
3609 return (x & (x - 1)) == 0;
3610}
3611
// Aligns given value up to the nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
3613// Use types like uint32_t, uint64_t as T.
3614template <typename T>
3615static inline T VmaAlignUp(T val, T alignment)
3616{
3617 VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
3618 return (val + alignment - 1) & ~(alignment - 1);
3619}
3620
// Aligns given value down to the nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
3622// Use types like uint32_t, uint64_t as T.
3623template <typename T>
3624static inline T VmaAlignDown(T val, T alignment)
3625{
3626 VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
3627 return val & ~(alignment - 1);
3628}
3629
3630// Division with mathematical rounding to nearest number.
3631template <typename T>
3632static inline T VmaRoundDiv(T x, T y)
3633{
3634 return (x + (y / (T)2)) / y;
3635}
3636
3637// Divide by 'y' and round up to nearest integer.
3638template <typename T>
3639static inline T VmaDivideRoundingUp(T x, T y)
3640{
3641 return (x + y - (T)1) / y;
3642}
3643
// Returns the smallest power of 2 greater than or equal to v.
3645static inline uint32_t VmaNextPow2(uint32_t v)
3646{
3647 v--;
3648 v |= v >> 1;
3649 v |= v >> 2;
3650 v |= v >> 4;
3651 v |= v >> 8;
3652 v |= v >> 16;
3653 v++;
3654 return v;
3655}
3656
3657static inline uint64_t VmaNextPow2(uint64_t v)
3658{
3659 v--;
3660 v |= v >> 1;
3661 v |= v >> 2;
3662 v |= v >> 4;
3663 v |= v >> 8;
3664 v |= v >> 16;
3665 v |= v >> 32;
3666 v++;
3667 return v;
3668}
3669
// Returns the largest power of 2 less than or equal to v.
3671static inline uint32_t VmaPrevPow2(uint32_t v)
3672{
3673 v |= v >> 1;
3674 v |= v >> 2;
3675 v |= v >> 4;
3676 v |= v >> 8;
3677 v |= v >> 16;
3678 v = v ^ (v >> 1);
3679 return v;
3680}
3681
3682static inline uint64_t VmaPrevPow2(uint64_t v)
3683{
3684 v |= v >> 1;
3685 v |= v >> 2;
3686 v |= v >> 4;
3687 v |= v >> 8;
3688 v |= v >> 16;
3689 v |= v >> 32;
3690 v = v ^ (v >> 1);
3691 return v;
3692}
3693
3694static inline bool VmaStrIsEmpty(const char* pStr)
3695{
3696 return pStr == VMA_NULL || *pStr == '\0';
3697}
3698
3699/*
3700Returns true if two memory blocks occupy overlapping pages.
ResourceA must be at a lower memory offset than ResourceB.
3702
3703Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
3704chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
3705*/
3706static inline bool VmaBlocksOnSamePage(
3707 VkDeviceSize resourceAOffset,
3708 VkDeviceSize resourceASize,
3709 VkDeviceSize resourceBOffset,
3710 VkDeviceSize pageSize)
3711{
3712 VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
3713 VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
3714 VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
3715 VkDeviceSize resourceBStart = resourceBOffset;
3716 VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
3717 return resourceAEndPage == resourceBStartPage;
3718}
3719
3720/*
3721Returns true if given suballocation types could conflict and must respect
3722VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
3723or linear image and another one is optimal image. If type is unknown, behave
3724conservatively.
3725*/
3726static inline bool VmaIsBufferImageGranularityConflict(
3727 VmaSuballocationType suballocType1,
3728 VmaSuballocationType suballocType2)
3729{
3730 if (suballocType1 > suballocType2)
3731 {
        std::swap(suballocType1, suballocType2);
3733 }
3734
3735 switch (suballocType1)
3736 {
3737 case VMA_SUBALLOCATION_TYPE_FREE:
3738 return false;
3739 case VMA_SUBALLOCATION_TYPE_UNKNOWN:
3740 return true;
3741 case VMA_SUBALLOCATION_TYPE_BUFFER:
3742 return
3743 suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
3744 suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
3745 case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
3746 return
3747 suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
3748 suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
3749 suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
3750 case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
3751 return
3752 suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
3753 case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
3754 return false;
3755 default:
3756 VMA_ASSERT(0);
3757 return true;
3758 }
3759}
3760
3761static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
3762{
3763#if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
3764 uint32_t* pDst = (uint32_t*)((char*)pData + offset);
3765 const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
3766 for (size_t i = 0; i < numberCount; ++i, ++pDst)
3767 {
3768 *pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
3769 }
3770#else
3771 // no-op
3772#endif
3773}
3774
3775static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
3776{
3777#if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
3778 const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
3779 const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
3780 for (size_t i = 0; i < numberCount; ++i, ++pSrc)
3781 {
3782 if (*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
3783 {
3784 return false;
3785 }
3786 }
3787#endif
3788 return true;
3789}
3790
3791/*
3792Fills structure with parameters of an example buffer to be used for transfers
3793during GPU memory defragmentation.
3794*/
3795static void VmaFillGpuDefragmentationBufferCreateInfo(VkBufferCreateInfo& outBufCreateInfo)
3796{
    memset(&outBufCreateInfo, 0, sizeof(outBufCreateInfo));
3798 outBufCreateInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
3799 outBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
3800 outBufCreateInfo.size = (VkDeviceSize)VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE; // Example size.
3801}
3802
3803
3804/*
3805Performs binary search and returns iterator to first element that is greater or
3806equal to (key), according to comparison (cmp).
3807
3808Cmp should return true if first argument is less than second argument.
3809
The returned iterator points to the found element, if present in the collection,
or to the place where a new element with value (key) should be inserted.
3812*/
3813template <typename CmpLess, typename IterT, typename KeyT>
3814static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT& key, const CmpLess& cmp)
3815{
3816 size_t down = 0, up = size_t(end - beg);
3817 while (down < up)
3818 {
3819 const size_t mid = down + (up - down) / 2; // Overflow-safe midpoint calculation
3820 if (cmp(*(beg + mid), key))
3821 {
3822 down = mid + 1;
3823 }
3824 else
3825 {
3826 up = mid;
3827 }
3828 }
3829 return beg + down;
3830}
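
/*
Usage sketch: given an array `VkDeviceSize offsets[n]` sorted ascending (`offsets`,
`n`, and `key` are placeholders), find the first element that is >= key:

    const VkDeviceSize* it = VmaBinaryFindFirstNotLess(offsets, offsets + n, key,
        [](VkDeviceSize lhs, VkDeviceSize rhs) { return lhs < rhs; });
*/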
3831
3832template<typename CmpLess, typename IterT, typename KeyT>
3833IterT VmaBinaryFindSorted(const IterT& beg, const IterT& end, const KeyT& value, const CmpLess& cmp)
3834{
3835 IterT it = VmaBinaryFindFirstNotLess<CmpLess, IterT, KeyT>(
3836 beg, end, value, cmp);
3837 if (it == end ||
3838 (!cmp(*it, value) && !cmp(value, *it)))
3839 {
3840 return it;
3841 }
3842 return end;
3843}
3844
3845/*
Returns true if all pointers in the array are non-null and unique.
3847Warning! O(n^2) complexity. Use only inside VMA_HEAVY_ASSERT.
3848T must be pointer type, e.g. VmaAllocation, VmaPool.
3849*/
3850template<typename T>
3851static bool VmaValidatePointerArray(uint32_t count, const T* arr)
3852{
3853 for (uint32_t i = 0; i < count; ++i)
3854 {
3855 const T iPtr = arr[i];
3856 if (iPtr == VMA_NULL)
3857 {
3858 return false;
3859 }
3860 for (uint32_t j = i + 1; j < count; ++j)
3861 {
3862 if (iPtr == arr[j])
3863 {
3864 return false;
3865 }
3866 }
3867 }
3868 return true;
3869}
3870
3871template<typename MainT, typename NewT>
3872static inline void VmaPnextChainPushFront(MainT* mainStruct, NewT* newStruct)
3873{
3874 newStruct->pNext = mainStruct->pNext;
3875 mainStruct->pNext = newStruct;
3876}
3877// Finds structure with s->sType == sType in mainStruct->pNext chain.
3878// Returns pointer to it. If not found, returns null.
3879template<typename FindT, typename MainT>
3880static inline const FindT* VmaPnextChainFind(const MainT* mainStruct, VkStructureType sType)
3881{
3882 for(const VkBaseInStructure* s = (const VkBaseInStructure*)mainStruct->pNext;
3883 s != VMA_NULL; s = s->pNext)
3884 {
3885 if(s->sType == sType)
3886 {
3887 return (const FindT*)s;
3888 }
3889 }
3890 return VMA_NULL;
3891}
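
/*
Usage sketch: find VkMemoryDedicatedRequirements in a VkMemoryRequirements2::pNext
chain (`memReq2` is assumed to be a filled VkMemoryRequirements2):

    const VkMemoryDedicatedRequirements* dedicatedReq =
        VmaPnextChainFind<VkMemoryDedicatedRequirements>(&memReq2,
            VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS);
*/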
3892
3893// An abstraction over buffer or image `usage` flags, depending on available extensions.
3894struct VmaBufferImageUsage
3895{
3896#if VMA_KHR_MAINTENANCE5
3897 typedef uint64_t BaseType; // VkFlags64
3898#else
3899 typedef uint32_t BaseType; // VkFlags32
3900#endif
3901
3902 static const VmaBufferImageUsage UNKNOWN;
3903
3904 BaseType Value;
3905
3906 VmaBufferImageUsage() { *this = UNKNOWN; }
3907 explicit VmaBufferImageUsage(BaseType usage) : Value(usage) { }
3908 VmaBufferImageUsage(const VkBufferCreateInfo &createInfo, bool useKhrMaintenance5);
3909 explicit VmaBufferImageUsage(const VkImageCreateInfo &createInfo);
3910
3911 bool operator==(const VmaBufferImageUsage& rhs) const { return Value == rhs.Value; }
3912 bool operator!=(const VmaBufferImageUsage& rhs) const { return Value != rhs.Value; }
3913
3914 bool Contains(BaseType flag) const { return (Value & flag) != 0; }
3915 bool ContainsDeviceAccess() const
3916 {
        // This relies on values of VK_IMAGE_USAGE_TRANSFER* being the same as VK_BUFFER_USAGE_TRANSFER*.
3918 return (Value & ~BaseType(VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_TRANSFER_SRC_BIT)) != 0;
3919 }
3920};
3921
3922const VmaBufferImageUsage VmaBufferImageUsage::UNKNOWN = VmaBufferImageUsage(0);
3923
3924VmaBufferImageUsage::VmaBufferImageUsage(const VkBufferCreateInfo &createInfo,
3925 bool useKhrMaintenance5)
3926{
3927#if VMA_KHR_MAINTENANCE5
3928 if(useKhrMaintenance5)
3929 {
3930 // If VkBufferCreateInfo::pNext chain contains VkBufferUsageFlags2CreateInfoKHR,
3931 // take usage from it and ignore VkBufferCreateInfo::usage, per specification
3932 // of the VK_KHR_maintenance5 extension.
3933 const VkBufferUsageFlags2CreateInfoKHR* const usageFlags2 =
            VmaPnextChainFind<VkBufferUsageFlags2CreateInfoKHR>(&createInfo, VK_STRUCTURE_TYPE_BUFFER_USAGE_FLAGS_2_CREATE_INFO_KHR);
3935 if(usageFlags2)
3936 {
3937 this->Value = usageFlags2->usage;
3938 return;
3939 }
3940 }
3941#endif
3942
3943 this->Value = (BaseType)createInfo.usage;
3944}
3945
3946VmaBufferImageUsage::VmaBufferImageUsage(const VkImageCreateInfo &createInfo)
3947{
3948 // Maybe in the future there will be VK_KHR_maintenanceN extension with structure
3949 // VkImageUsageFlags2CreateInfoKHR, like the one for buffers...
3950
3951 this->Value = (BaseType)createInfo.usage;
3952}
3953
// This is the main algorithm that guides the selection of the memory type best suited for an allocation -
// it converts usage to required/preferred/not preferred flags.
3956static bool FindMemoryPreferences(
3957 bool isIntegratedGPU,
3958 const VmaAllocationCreateInfo& allocCreateInfo,
3959 VmaBufferImageUsage bufImgUsage,
3960 VkMemoryPropertyFlags& outRequiredFlags,
3961 VkMemoryPropertyFlags& outPreferredFlags,
3962 VkMemoryPropertyFlags& outNotPreferredFlags)
3963{
3964 outRequiredFlags = allocCreateInfo.requiredFlags;
3965 outPreferredFlags = allocCreateInfo.preferredFlags;
3966 outNotPreferredFlags = 0;
3967
3968 switch(allocCreateInfo.usage)
3969 {
3970 case VMA_MEMORY_USAGE_UNKNOWN:
3971 break;
3972 case VMA_MEMORY_USAGE_GPU_ONLY:
3973 if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
3974 {
3975 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3976 }
3977 break;
3978 case VMA_MEMORY_USAGE_CPU_ONLY:
3979 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
3980 break;
3981 case VMA_MEMORY_USAGE_CPU_TO_GPU:
3982 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
3983 if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
3984 {
3985 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3986 }
3987 break;
3988 case VMA_MEMORY_USAGE_GPU_TO_CPU:
3989 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
3990 outPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
3991 break;
3992 case VMA_MEMORY_USAGE_CPU_COPY:
3993 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3994 break;
3995 case VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED:
3996 outRequiredFlags |= VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT;
3997 break;
3998 case VMA_MEMORY_USAGE_AUTO:
3999 case VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE:
4000 case VMA_MEMORY_USAGE_AUTO_PREFER_HOST:
4001 {
4002 if(bufImgUsage == VmaBufferImageUsage::UNKNOWN)
4003 {
4004 VMA_ASSERT(0 && "VMA_MEMORY_USAGE_AUTO* values can only be used with functions like vmaCreateBuffer, vmaCreateImage so that the details of the created resource are known."
4005 " Maybe you use VkBufferUsageFlags2CreateInfoKHR but forgot to use VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT?" );
4006 return false;
4007 }
4008
4009 const bool deviceAccess = bufImgUsage.ContainsDeviceAccess();
4010 const bool hostAccessSequentialWrite = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT) != 0;
4011 const bool hostAccessRandom = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) != 0;
4012 const bool hostAccessAllowTransferInstead = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) != 0;
4013 const bool preferDevice = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE;
4014 const bool preferHost = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST;
4015
4016 // CPU random access - e.g. a buffer written to or transferred from GPU to read back on CPU.
4017 if(hostAccessRandom)
4018 {
4019 // Prefer cached. Cannot require it, because some platforms don't have it (e.g. Raspberry Pi - see #362)!
4020 outPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
4021
4022 if (!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
4023 {
4024 // Nice if it will end up in HOST_VISIBLE, but more importantly prefer DEVICE_LOCAL.
4025 // Omitting HOST_VISIBLE here is intentional.
4026 // In case there is DEVICE_LOCAL | HOST_VISIBLE | HOST_CACHED, it will pick that one.
                // Otherwise, this will give the same weight to DEVICE_LOCAL as to HOST_VISIBLE | HOST_CACHED and select the former if it occurs first on the list.
4028 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4029 }
4030 else
4031 {
4032 // Always CPU memory.
4033 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
4034 }
4035 }
4036 // CPU sequential write - may be CPU or host-visible GPU memory, uncached and write-combined.
4037 else if(hostAccessSequentialWrite)
4038 {
4039 // Want uncached and write-combined.
4040 outNotPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
4041
4042 if(!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
4043 {
4044 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
4045 }
4046 else
4047 {
4048 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
4049 // Direct GPU access, CPU sequential write (e.g. a dynamic uniform buffer updated every frame)
4050 if(deviceAccess)
4051 {
4052 // Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose GPU memory.
4053 if(preferHost)
4054 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4055 else
4056 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4057 }
4058 // GPU no direct access, CPU sequential write (e.g. an upload buffer to be transferred to the GPU)
4059 else
4060 {
4061 // Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose CPU memory.
4062 if(preferDevice)
4063 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4064 else
4065 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4066 }
4067 }
4068 }
4069 // No CPU access
4070 else
4071 {
4072 // if(deviceAccess)
4073 //
4074 // GPU access, no CPU access (e.g. a color attachment image) - prefer GPU memory,
4075 // unless there is a clear preference from the user not to do so.
4076 //
4077 // else:
4078 //
4079 // No direct GPU access, no CPU access, just transfers.
4080 // It may be staging copy intended for e.g. preserving image for next frame (then better GPU memory) or
4081 // a "swap file" copy to free some GPU memory (then better CPU memory).
            // Up to the user to decide. If no preference, assume the former and choose GPU memory.
4083
4084 if(preferHost)
4085 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4086 else
4087 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
4088 }
4089 break;
4090 }
4091 default:
4092 VMA_ASSERT(0);
4093 }
4094
4095 // Avoid DEVICE_COHERENT unless explicitly requested.
4096 if(((allocCreateInfo.requiredFlags | allocCreateInfo.preferredFlags) &
4097 (VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)) == 0)
4098 {
4099 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY;
4100 }
4101
4102 return true;
4103}
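
/*
Example (illustrative sketch): what the mapping above produces for a typical
dynamic uniform buffer on a discrete GPU - VMA_MEMORY_USAGE_AUTO with sequential
host writes and direct device access.

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;

    const VmaBufferImageUsage bufUsage(VmaBufferImageUsage::BaseType(
        VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT));

    VkMemoryPropertyFlags required = 0, preferred = 0, notPreferred = 0;
    // First argument: isIntegratedGPU = false.
    if(FindMemoryPreferences(false, allocCreateInfo, bufUsage, required, preferred, notPreferred))
    {
        // required == HOST_VISIBLE, preferred == DEVICE_LOCAL (BAR/unified memory),
        // notPreferred contains HOST_CACHED and DEVICE_UNCACHED_AMD.
    }
*/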
4104
4105////////////////////////////////////////////////////////////////////////////////
4106// Memory allocation
4107
4108static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
4109{
4110 void* result = VMA_NULL;
4111 if ((pAllocationCallbacks != VMA_NULL) &&
4112 (pAllocationCallbacks->pfnAllocation != VMA_NULL))
4113 {
4114 result = (*pAllocationCallbacks->pfnAllocation)(
4115 pAllocationCallbacks->pUserData,
4116 size,
4117 alignment,
4118 VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
4119 }
4120 else
4121 {
4122 result = VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
4123 }
4124 VMA_ASSERT(result != VMA_NULL && "CPU memory allocation failed.");
4125 return result;
4126}
4127
4128static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
4129{
4130 if ((pAllocationCallbacks != VMA_NULL) &&
4131 (pAllocationCallbacks->pfnFree != VMA_NULL))
4132 {
4133 (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
4134 }
4135 else
4136 {
4137 VMA_SYSTEM_ALIGNED_FREE(ptr);
4138 }
4139}
4140
4141template<typename T>
4142static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
4143{
    return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
4145}
4146
4147template<typename T>
4148static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
4149{
    return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
4151}
4152
4153#define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
4154
4155#define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
4156
4157template<typename T>
4158static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
4159{
4160 ptr->~T();
4161 VmaFree(pAllocationCallbacks, ptr);
4162}
4163
4164template<typename T>
4165static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
4166{
4167 if (ptr != VMA_NULL)
4168 {
4169 for (size_t i = count; i--; )
4170 {
4171 ptr[i].~T();
4172 }
4173 VmaFree(pAllocationCallbacks, ptr);
4174 }
4175}
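
/*
Example (illustrative sketch): the macros above pair with vma_delete /
vma_delete_array, which invoke destructors explicitly before returning the memory
to the callbacks. 'allocs' stands for a const VkAllocationCallbacks* available in
the calling context (may be null to fall back to the system allocator).

    uint32_t* arr = vma_new_array(allocs, uint32_t, 16);
    for(size_t i = 0; i < 16; ++i)
        arr[i] = (uint32_t)i;
    vma_delete_array(allocs, arr, 16);
*/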
4176
4177static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr)
4178{
4179 if (srcStr != VMA_NULL)
4180 {
        const size_t len = strlen(srcStr);
        char* const result = vma_new_array(allocs, char, len + 1);
        memcpy(result, srcStr, len + 1);
4184 return result;
4185 }
4186 return VMA_NULL;
4187}
4188
4189#if VMA_STATS_STRING_ENABLED
4190static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr, size_t strLen)
4191{
4192 if (srcStr != VMA_NULL)
4193 {
4194 char* const result = vma_new_array(allocs, char, strLen + 1);
        memcpy(result, srcStr, strLen);
4196 result[strLen] = '\0';
4197 return result;
4198 }
4199 return VMA_NULL;
4200}
4201#endif // VMA_STATS_STRING_ENABLED
4202
4203static void VmaFreeString(const VkAllocationCallbacks* allocs, char* str)
4204{
4205 if (str != VMA_NULL)
4206 {
        const size_t len = strlen(str);
        vma_delete_array(allocs, str, len + 1);
4209 }
4210}
4211
4212template<typename CmpLess, typename VectorT>
4213size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
4214{
4215 const size_t indexToInsert = VmaBinaryFindFirstNotLess(
4216 vector.data(),
4217 vector.data() + vector.size(),
4218 value,
4219 CmpLess()) - vector.data();
4220 VmaVectorInsert(vector, indexToInsert, value);
4221 return indexToInsert;
4222}
4223
4224template<typename CmpLess, typename VectorT>
4225bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
4226{
4227 CmpLess comparator;
4228 typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
4229 vector.begin(),
4230 vector.end(),
4231 value,
4232 comparator);
4233 if ((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
4234 {
4235 size_t indexToRemove = it - vector.begin();
4236 VmaVectorRemove(vector, indexToRemove);
4237 return true;
4238 }
4239 return false;
4240}
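
/*
Example (illustrative sketch): keeping a VmaVector sorted with the helpers above.
VmaStlAllocator and VmaVector are defined further below in this file; 'allocs' is a
hypothetical const VkAllocationCallbacks*.

    struct U32Less { bool operator()(uint32_t a, uint32_t b) const { return a < b; } };

    VmaVector<uint32_t, VmaStlAllocator<uint32_t>> vec((VmaStlAllocator<uint32_t>(allocs)));
    VmaVectorInsertSorted<U32Less>(vec, 5u);
    VmaVectorInsertSorted<U32Less>(vec, 2u); // vec == { 2, 5 }
    VmaVectorRemoveSorted<U32Less>(vec, 5u); // returns true, vec == { 2 }
*/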
4241#endif // _VMA_FUNCTIONS
4242
4243#ifndef _VMA_STATISTICS_FUNCTIONS
4244
4245static void VmaClearStatistics(VmaStatistics& outStats)
4246{
4247 outStats.blockCount = 0;
4248 outStats.allocationCount = 0;
4249 outStats.blockBytes = 0;
4250 outStats.allocationBytes = 0;
4251}
4252
4253static void VmaAddStatistics(VmaStatistics& inoutStats, const VmaStatistics& src)
4254{
4255 inoutStats.blockCount += src.blockCount;
4256 inoutStats.allocationCount += src.allocationCount;
4257 inoutStats.blockBytes += src.blockBytes;
4258 inoutStats.allocationBytes += src.allocationBytes;
4259}
4260
4261static void VmaClearDetailedStatistics(VmaDetailedStatistics& outStats)
4262{
    VmaClearStatistics(outStats.statistics);
4264 outStats.unusedRangeCount = 0;
4265 outStats.allocationSizeMin = VK_WHOLE_SIZE;
4266 outStats.allocationSizeMax = 0;
4267 outStats.unusedRangeSizeMin = VK_WHOLE_SIZE;
4268 outStats.unusedRangeSizeMax = 0;
4269}
4270
4271static void VmaAddDetailedStatisticsAllocation(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
4272{
4273 inoutStats.statistics.allocationCount++;
4274 inoutStats.statistics.allocationBytes += size;
4275 inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, size);
4276 inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, size);
4277}
4278
4279static void VmaAddDetailedStatisticsUnusedRange(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
4280{
4281 inoutStats.unusedRangeCount++;
4282 inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, size);
4283 inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, size);
4284}
4285
4286static void VmaAddDetailedStatistics(VmaDetailedStatistics& inoutStats, const VmaDetailedStatistics& src)
4287{
    VmaAddStatistics(inoutStats.statistics, src.statistics);
4289 inoutStats.unusedRangeCount += src.unusedRangeCount;
4290 inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, src.allocationSizeMin);
4291 inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, src.allocationSizeMax);
4292 inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, src.unusedRangeSizeMin);
4293 inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, src.unusedRangeSizeMax);
4294}
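
/*
Example (illustrative sketch): aggregating statistics. Min/max fields start at
their fold identities (VK_WHOLE_SIZE and 0), so VMA_MIN / VMA_MAX work correctly
even before the first sample is added.

    VmaDetailedStatistics total;
    VmaClearDetailedStatistics(total);
    VmaAddDetailedStatisticsAllocation(total, 256);
    VmaAddDetailedStatisticsAllocation(total, 1024);
    // total.statistics.allocationCount == 2, total.statistics.allocationBytes == 1280,
    // total.allocationSizeMin == 256, total.allocationSizeMax == 1024.
*/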
4295
4296#endif // _VMA_STATISTICS_FUNCTIONS
4297
4298#ifndef _VMA_MUTEX_LOCK
4299// Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
4300struct VmaMutexLock
4301{
4302 VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLock)
4303public:
4304 VmaMutexLock(VMA_MUTEX& mutex, bool useMutex = true) :
4305 m_pMutex(useMutex ? &mutex : VMA_NULL)
4306 {
4307 if (m_pMutex) { m_pMutex->Lock(); }
4308 }
4309 ~VmaMutexLock() { if (m_pMutex) { m_pMutex->Unlock(); } }
4310
4311private:
4312 VMA_MUTEX* m_pMutex;
4313};
4314
4315// Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for reading.
4316struct VmaMutexLockRead
4317{
4318 VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLockRead)
4319public:
4320 VmaMutexLockRead(VMA_RW_MUTEX& mutex, bool useMutex) :
4321 m_pMutex(useMutex ? &mutex : VMA_NULL)
4322 {
4323 if (m_pMutex) { m_pMutex->LockRead(); }
4324 }
4325 ~VmaMutexLockRead() { if (m_pMutex) { m_pMutex->UnlockRead(); } }
4326
4327private:
4328 VMA_RW_MUTEX* m_pMutex;
4329};
4330
4331// Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for writing.
4332struct VmaMutexLockWrite
4333{
4334 VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLockWrite)
4335public:
4336 VmaMutexLockWrite(VMA_RW_MUTEX& mutex, bool useMutex)
4337 : m_pMutex(useMutex ? &mutex : VMA_NULL)
4338 {
4339 if (m_pMutex) { m_pMutex->LockWrite(); }
4340 }
4341 ~VmaMutexLockWrite() { if (m_pMutex) { m_pMutex->UnlockWrite(); } }
4342
4343private:
4344 VMA_RW_MUTEX* m_pMutex;
4345};
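
/*
Example (illustrative sketch): scope-based locking. Passing useMutex = false turns
the guard into a no-op; this is how the library honors
VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT without branching at every call site.

    VMA_RW_MUTEX rwMutex;
    {
        VmaMutexLockRead lock(rwMutex, true); // read lock held until end of scope
        // ... read the shared state here ...
    } // unlocked automatically
*/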
4346
4347#if VMA_DEBUG_GLOBAL_MUTEX
4348 static VMA_MUTEX gDebugGlobalMutex;
4349 #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
4350#else
4351 #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
4352#endif
4353#endif // _VMA_MUTEX_LOCK
4354
4355#ifndef _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
4356// An object that increments given atomic but decrements it back in the destructor unless Commit() is called.
4357template<typename AtomicT>
4358struct AtomicTransactionalIncrement
4359{
4360public:
4361 using T = decltype(AtomicT().load());
4362
4363 ~AtomicTransactionalIncrement()
4364 {
4365 if(m_Atomic)
4366 --(*m_Atomic);
4367 }
4368
4369 void Commit() { m_Atomic = VMA_NULL; }
4370 T Increment(AtomicT* atomic)
4371 {
4372 m_Atomic = atomic;
4373 return m_Atomic->fetch_add(1);
4374 }
4375
4376private:
4377 AtomicT* m_Atomic = VMA_NULL;
4378};
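
/*
Example (illustrative sketch): reserving a slot in a counter that must be rolled
back if a later step fails. 'InitializeResource' is a hypothetical fallible step.

    std::atomic<uint32_t> liveObjectCount{ 0 };

    AtomicTransactionalIncrement<std::atomic<uint32_t>> increment;
    increment.Increment(&liveObjectCount); // counter becomes 1
    if(InitializeResource())
        increment.Commit(); // keep the increment
    // otherwise the destructor rolls the counter back to 0
*/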
4379#endif // _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
4380
4381#ifndef _VMA_STL_ALLOCATOR
4382// STL-compatible allocator.
4383template<typename T>
4384struct VmaStlAllocator
4385{
4386 const VkAllocationCallbacks* const m_pCallbacks;
4387 typedef T value_type;
4388
4389 VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) {}
4390 template<typename U>
4391 VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) {}
4392 VmaStlAllocator(const VmaStlAllocator&) = default;
4393 VmaStlAllocator& operator=(const VmaStlAllocator&) = delete;
4394
4395 T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
4396 void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
4397
4398 template<typename U>
4399 bool operator==(const VmaStlAllocator<U>& rhs) const
4400 {
4401 return m_pCallbacks == rhs.m_pCallbacks;
4402 }
4403 template<typename U>
4404 bool operator!=(const VmaStlAllocator<U>& rhs) const
4405 {
4406 return m_pCallbacks != rhs.m_pCallbacks;
4407 }
4408};
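
/*
Example (illustrative sketch): routing a standard container's heap traffic through
the Vulkan allocation callbacks, assuming <vector> is included and 'allocs' is a
const VkAllocationCallbacks* (possibly null, which selects the system allocator).

    std::vector<uint32_t, VmaStlAllocator<uint32_t>> v((VmaStlAllocator<uint32_t>(allocs)));
    v.push_back(42u); // the vector's storage now comes from 'allocs'
*/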
4409#endif // _VMA_STL_ALLOCATOR
4410
4411#ifndef _VMA_VECTOR
4412/* Class with interface compatible with subset of std::vector.
4413T must be POD because constructors and destructors are not called and memcpy is
4414used for these objects. */
4415template<typename T, typename AllocatorT>
4416class VmaVector
4417{
4418public:
4419 typedef T value_type;
4420 typedef T* iterator;
4421 typedef const T* const_iterator;
4422
4423 VmaVector(const AllocatorT& allocator);
4424 VmaVector(size_t count, const AllocatorT& allocator);
4425 // This version of the constructor is here for compatibility with pre-C++14 std::vector.
4426 // value is unused.
4427 VmaVector(size_t count, const T& value, const AllocatorT& allocator) : VmaVector(count, allocator) {}
4428 VmaVector(const VmaVector<T, AllocatorT>& src);
4429 VmaVector& operator=(const VmaVector& rhs);
4430 ~VmaVector() { VmaFree(m_Allocator.m_pCallbacks, m_pArray); }
4431
4432 bool empty() const { return m_Count == 0; }
4433 size_t size() const { return m_Count; }
4434 T* data() { return m_pArray; }
4435 T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
4436 T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
4437 const T* data() const { return m_pArray; }
4438 const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
4439 const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
4440
4441 iterator begin() { return m_pArray; }
4442 iterator end() { return m_pArray + m_Count; }
4443 const_iterator cbegin() const { return m_pArray; }
4444 const_iterator cend() const { return m_pArray + m_Count; }
4445 const_iterator begin() const { return cbegin(); }
4446 const_iterator end() const { return cend(); }
4447
    void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
    void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
    void push_front(const T& src) { insert(0, src); }
4451
4452 void push_back(const T& src);
4453 void reserve(size_t newCapacity, bool freeMemory = false);
4454 void resize(size_t newCount);
    void clear() { resize(0); }
4456 void shrink_to_fit();
4457 void insert(size_t index, const T& src);
4458 void remove(size_t index);
4459
4460 T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
4461 const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
4462
4463private:
4464 AllocatorT m_Allocator;
4465 T* m_pArray;
4466 size_t m_Count;
4467 size_t m_Capacity;
4468};
4469
4470#ifndef _VMA_VECTOR_FUNCTIONS
4471template<typename T, typename AllocatorT>
4472VmaVector<T, AllocatorT>::VmaVector(const AllocatorT& allocator)
4473 : m_Allocator(allocator),
4474 m_pArray(VMA_NULL),
4475 m_Count(0),
4476 m_Capacity(0) {}
4477
4478template<typename T, typename AllocatorT>
4479VmaVector<T, AllocatorT>::VmaVector(size_t count, const AllocatorT& allocator)
4480 : m_Allocator(allocator),
4481 m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
4482 m_Count(count),
4483 m_Capacity(count) {}
4484
4485template<typename T, typename AllocatorT>
4486VmaVector<T, AllocatorT>::VmaVector(const VmaVector& src)
4487 : m_Allocator(src.m_Allocator),
4488 m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
4489 m_Count(src.m_Count),
4490 m_Capacity(src.m_Count)
4491{
4492 if (m_Count != 0)
4493 {
4494 memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
4495 }
4496}
4497
4498template<typename T, typename AllocatorT>
4499VmaVector<T, AllocatorT>& VmaVector<T, AllocatorT>::operator=(const VmaVector& rhs)
4500{
4501 if (&rhs != this)
4502 {
        resize(rhs.m_Count);
4504 if (m_Count != 0)
4505 {
4506 memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
4507 }
4508 }
4509 return *this;
4510}
4511
4512template<typename T, typename AllocatorT>
4513void VmaVector<T, AllocatorT>::push_back(const T& src)
4514{
4515 const size_t newIndex = size();
    resize(newIndex + 1);
4517 m_pArray[newIndex] = src;
4518}
4519
4520template<typename T, typename AllocatorT>
4521void VmaVector<T, AllocatorT>::reserve(size_t newCapacity, bool freeMemory)
4522{
4523 newCapacity = VMA_MAX(newCapacity, m_Count);
4524
4525 if ((newCapacity < m_Capacity) && !freeMemory)
4526 {
4527 newCapacity = m_Capacity;
4528 }
4529
4530 if (newCapacity != m_Capacity)
4531 {
        T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
4533 if (m_Count != 0)
4534 {
4535 memcpy(newArray, m_pArray, m_Count * sizeof(T));
4536 }
4537 VmaFree(m_Allocator.m_pCallbacks, m_pArray);
4538 m_Capacity = newCapacity;
4539 m_pArray = newArray;
4540 }
4541}
4542
4543template<typename T, typename AllocatorT>
4544void VmaVector<T, AllocatorT>::resize(size_t newCount)
4545{
4546 size_t newCapacity = m_Capacity;
4547 if (newCount > m_Capacity)
4548 {
4549 newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
4550 }
4551
4552 if (newCapacity != m_Capacity)
4553 {
4554 T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
4555 const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
4556 if (elementsToCopy != 0)
4557 {
4558 memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
4559 }
4560 VmaFree(m_Allocator.m_pCallbacks, m_pArray);
4561 m_Capacity = newCapacity;
4562 m_pArray = newArray;
4563 }
4564
4565 m_Count = newCount;
4566}
4567
4568template<typename T, typename AllocatorT>
4569void VmaVector<T, AllocatorT>::shrink_to_fit()
4570{
4571 if (m_Capacity > m_Count)
4572 {
4573 T* newArray = VMA_NULL;
4574 if (m_Count > 0)
4575 {
4576 newArray = VmaAllocateArray<T>(m_Allocator.m_pCallbacks, m_Count);
4577 memcpy(newArray, m_pArray, m_Count * sizeof(T));
4578 }
4579 VmaFree(m_Allocator.m_pCallbacks, m_pArray);
4580 m_Capacity = m_Count;
4581 m_pArray = newArray;
4582 }
4583}
4584
4585template<typename T, typename AllocatorT>
4586void VmaVector<T, AllocatorT>::insert(size_t index, const T& src)
4587{
4588 VMA_HEAVY_ASSERT(index <= m_Count);
4589 const size_t oldCount = size();
    resize(oldCount + 1);
4591 if (index < oldCount)
4592 {
4593 memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
4594 }
4595 m_pArray[index] = src;
4596}
4597
4598template<typename T, typename AllocatorT>
4599void VmaVector<T, AllocatorT>::remove(size_t index)
4600{
4601 VMA_HEAVY_ASSERT(index < m_Count);
4602 const size_t oldCount = size();
4603 if (index < oldCount - 1)
4604 {
4605 memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
4606 }
    resize(oldCount - 1);
4608}
4609#endif // _VMA_VECTOR_FUNCTIONS
4610
4611template<typename T, typename allocatorT>
4612static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
4613{
4614 vec.insert(index, item);
4615}
4616
4617template<typename T, typename allocatorT>
4618static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
4619{
4620 vec.remove(index);
4621}
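
/*
Example (illustrative sketch): basic usage. Because growth is memcpy-based and no
constructors run, this container is only suitable for trivially copyable elements.
'allocs' is a hypothetical const VkAllocationCallbacks*.

    VmaVector<VkDeviceSize, VmaStlAllocator<VkDeviceSize>> sizes(
        (VmaStlAllocator<VkDeviceSize>(allocs)));
    sizes.push_back(256);
    sizes.push_back(4096);
    sizes.insert(0, 64); // sizes == { 64, 256, 4096 }
    sizes.remove(1);     // sizes == { 64, 4096 }
*/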
4622#endif // _VMA_VECTOR
4623
4624#ifndef _VMA_SMALL_VECTOR
4625/*
4626This is a vector (a variable-sized array), optimized for the case when the array is small.
4627
4628It contains some number of elements in-place, which allows it to avoid heap allocation
4629when the actual number of elements is below that threshold. This allows normal "small"
4630cases to be fast without losing generality for large inputs.
4631*/
4632template<typename T, typename AllocatorT, size_t N>
4633class VmaSmallVector
4634{
4635public:
4636 typedef T value_type;
4637 typedef T* iterator;
4638
4639 VmaSmallVector(const AllocatorT& allocator);
4640 VmaSmallVector(size_t count, const AllocatorT& allocator);
4641 template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
4642 VmaSmallVector(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
4643 template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
4644 VmaSmallVector<T, AllocatorT, N>& operator=(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
4645 ~VmaSmallVector() = default;
4646
4647 bool empty() const { return m_Count == 0; }
4648 size_t size() const { return m_Count; }
4649 T* data() { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
4650 T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
4651 T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
4652 const T* data() const { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
4653 const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
4654 const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
4655
4656 iterator begin() { return data(); }
4657 iterator end() { return data() + m_Count; }
4658
    void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
    void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
    void push_front(const T& src) { insert(0, src); }
4662
4663 void push_back(const T& src);
4664 void resize(size_t newCount, bool freeMemory = false);
4665 void clear(bool freeMemory = false);
4666 void insert(size_t index, const T& src);
4667 void remove(size_t index);
4668
4669 T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
4670 const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
4671
4672private:
4673 size_t m_Count;
    T m_StaticArray[N]; // Used when m_Count <= N
    VmaVector<T, AllocatorT> m_DynamicArray; // Used when m_Count > N
4676};
4677
4678#ifndef _VMA_SMALL_VECTOR_FUNCTIONS
4679template<typename T, typename AllocatorT, size_t N>
4680VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(const AllocatorT& allocator)
4681 : m_Count(0),
4682 m_DynamicArray(allocator) {}
4683
4684template<typename T, typename AllocatorT, size_t N>
4685VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(size_t count, const AllocatorT& allocator)
4686 : m_Count(count),
4687 m_DynamicArray(count > N ? count : 0, allocator) {}
4688
4689template<typename T, typename AllocatorT, size_t N>
4690void VmaSmallVector<T, AllocatorT, N>::push_back(const T& src)
4691{
4692 const size_t newIndex = size();
    resize(newIndex + 1);
4694 data()[newIndex] = src;
4695}
4696
4697template<typename T, typename AllocatorT, size_t N>
4698void VmaSmallVector<T, AllocatorT, N>::resize(size_t newCount, bool freeMemory)
4699{
4700 if (newCount > N && m_Count > N)
4701 {
4702 // Any direction, staying in m_DynamicArray
4703 m_DynamicArray.resize(newCount);
4704 if (freeMemory)
4705 {
4706 m_DynamicArray.shrink_to_fit();
4707 }
4708 }
4709 else if (newCount > N && m_Count <= N)
4710 {
4711 // Growing, moving from m_StaticArray to m_DynamicArray
4712 m_DynamicArray.resize(newCount);
4713 if (m_Count > 0)
4714 {
4715 memcpy(m_DynamicArray.data(), m_StaticArray, m_Count * sizeof(T));
4716 }
4717 }
4718 else if (newCount <= N && m_Count > N)
4719 {
4720 // Shrinking, moving from m_DynamicArray to m_StaticArray
4721 if (newCount > 0)
4722 {
4723 memcpy(m_StaticArray, m_DynamicArray.data(), newCount * sizeof(T));
4724 }
4725 m_DynamicArray.resize(0);
4726 if (freeMemory)
4727 {
4728 m_DynamicArray.shrink_to_fit();
4729 }
4730 }
4731 else
4732 {
4733 // Any direction, staying in m_StaticArray - nothing to do here
4734 }
4735 m_Count = newCount;
4736}
4737
4738template<typename T, typename AllocatorT, size_t N>
4739void VmaSmallVector<T, AllocatorT, N>::clear(bool freeMemory)
4740{
4741 m_DynamicArray.clear();
4742 if (freeMemory)
4743 {
4744 m_DynamicArray.shrink_to_fit();
4745 }
4746 m_Count = 0;
4747}
4748
4749template<typename T, typename AllocatorT, size_t N>
4750void VmaSmallVector<T, AllocatorT, N>::insert(size_t index, const T& src)
4751{
4752 VMA_HEAVY_ASSERT(index <= m_Count);
4753 const size_t oldCount = size();
    resize(oldCount + 1);
4755 T* const dataPtr = data();
4756 if (index < oldCount)
4757 {
        // Note: this could be more optimal in the case where the memmove could be a memcpy directly from m_StaticArray to m_DynamicArray.
4759 memmove(dataPtr + (index + 1), dataPtr + index, (oldCount - index) * sizeof(T));
4760 }
4761 dataPtr[index] = src;
4762}
4763
4764template<typename T, typename AllocatorT, size_t N>
4765void VmaSmallVector<T, AllocatorT, N>::remove(size_t index)
4766{
4767 VMA_HEAVY_ASSERT(index < m_Count);
4768 const size_t oldCount = size();
4769 if (index < oldCount - 1)
4770 {
        // Note: this could be more optimal in the case where the memmove could be a memcpy directly from m_DynamicArray to m_StaticArray.
4772 T* const dataPtr = data();
4773 memmove(dataPtr + index, dataPtr + (index + 1), (oldCount - index - 1) * sizeof(T));
4774 }
    resize(oldCount - 1);
4776}
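
/*
Example (illustrative sketch): with N == 4, the first four elements live in
m_StaticArray and cause no heap allocation; the fifth push_back migrates the
contents into m_DynamicArray. 'allocs' is a hypothetical callbacks pointer.

    VmaSmallVector<uint32_t, VmaStlAllocator<uint32_t>, 4> sv(
        (VmaStlAllocator<uint32_t>(allocs)));
    for(uint32_t i = 0; i < 5; ++i)
        sv.push_back(i); // the heap is touched only on the i == 4 iteration
*/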
4777#endif // _VMA_SMALL_VECTOR_FUNCTIONS
4778#endif // _VMA_SMALL_VECTOR
4779
4780#ifndef _VMA_POOL_ALLOCATOR
4781/*
4782Allocator for objects of type T using a list of arrays (pools) to speed up
4783allocation. Number of elements that can be allocated is not bounded because
4784allocator can create multiple blocks.
4785*/
4786template<typename T>
4787class VmaPoolAllocator
4788{
4789 VMA_CLASS_NO_COPY_NO_MOVE(VmaPoolAllocator)
4790public:
4791 VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity);
4792 ~VmaPoolAllocator();
4793 template<typename... Types> T* Alloc(Types&&... args);
4794 void Free(T* ptr);
4795
4796private:
4797 union Item
4798 {
4799 uint32_t NextFreeIndex;
4800 alignas(T) char Value[sizeof(T)];
4801 };
4802 struct ItemBlock
4803 {
4804 Item* pItems;
4805 uint32_t Capacity;
4806 uint32_t FirstFreeIndex;
4807 };
4808
4809 const VkAllocationCallbacks* m_pAllocationCallbacks;
4810 const uint32_t m_FirstBlockCapacity;
4811 VmaVector<ItemBlock, VmaStlAllocator<ItemBlock>> m_ItemBlocks;
4812
4813 ItemBlock& CreateNewBlock();
4814};
4815
4816#ifndef _VMA_POOL_ALLOCATOR_FUNCTIONS
4817template<typename T>
4818VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity)
4819 : m_pAllocationCallbacks(pAllocationCallbacks),
4820 m_FirstBlockCapacity(firstBlockCapacity),
4821 m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
4822{
4823 VMA_ASSERT(m_FirstBlockCapacity > 1);
4824}
4825
4826template<typename T>
4827VmaPoolAllocator<T>::~VmaPoolAllocator()
4828{
4829 for (size_t i = m_ItemBlocks.size(); i--;)
4830 vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemBlocks[i].Capacity);
4831 m_ItemBlocks.clear();
4832}
4833
4834template<typename T>
4835template<typename... Types> T* VmaPoolAllocator<T>::Alloc(Types&&... args)
4836{
4837 for (size_t i = m_ItemBlocks.size(); i--; )
4838 {
4839 ItemBlock& block = m_ItemBlocks[i];
4840 // This block has some free items: Use first one.
4841 if (block.FirstFreeIndex != UINT32_MAX)
4842 {
4843 Item* const pItem = &block.pItems[block.FirstFreeIndex];
4844 block.FirstFreeIndex = pItem->NextFreeIndex;
4845 T* result = (T*)&pItem->Value;
4846 new(result)T(std::forward<Types>(args)...); // Explicit constructor call.
4847 return result;
4848 }
4849 }
4850
4851 // No block has free item: Create new one and use it.
4852 ItemBlock& newBlock = CreateNewBlock();
4853 Item* const pItem = &newBlock.pItems[0];
4854 newBlock.FirstFreeIndex = pItem->NextFreeIndex;
4855 T* result = (T*)&pItem->Value;
4856 new(result) T(std::forward<Types>(args)...); // Explicit constructor call.
4857 return result;
4858}
4859
4860template<typename T>
4861void VmaPoolAllocator<T>::Free(T* ptr)
4862{
4863 // Search all memory blocks to find ptr.
4864 for (size_t i = m_ItemBlocks.size(); i--; )
4865 {
4866 ItemBlock& block = m_ItemBlocks[i];
4867
4868 // Casting to union.
4869 Item* pItemPtr;
4870 memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
4871
4872 // Check if pItemPtr is in address range of this block.
4873 if ((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + block.Capacity))
4874 {
4875 ptr->~T(); // Explicit destructor call.
4876 const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
4877 pItemPtr->NextFreeIndex = block.FirstFreeIndex;
4878 block.FirstFreeIndex = index;
4879 return;
4880 }
4881 }
4882 VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
4883}
4884
4885template<typename T>
4886typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
4887{
4888 const uint32_t newBlockCapacity = m_ItemBlocks.empty() ?
4889 m_FirstBlockCapacity : m_ItemBlocks.back().Capacity * 3 / 2;
4890
4891 const ItemBlock newBlock =
4892 {
4893 vma_new_array(m_pAllocationCallbacks, Item, newBlockCapacity),
4894 newBlockCapacity,
4895 0
4896 };
4897
4898 m_ItemBlocks.push_back(newBlock);
4899
4900 // Setup singly-linked list of all free items in this block.
4901 for (uint32_t i = 0; i < newBlockCapacity - 1; ++i)
4902 newBlock.pItems[i].NextFreeIndex = i + 1;
4903 newBlock.pItems[newBlockCapacity - 1].NextFreeIndex = UINT32_MAX;
4904 return m_ItemBlocks.back();
4905}
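
/*
Example (illustrative sketch): allocating objects from the pool. Alloc forwards
its arguments to T's constructor; Free runs the destructor and recycles the slot.

    struct Node { uint32_t value; Node(uint32_t v) : value(v) {} };

    VmaPoolAllocator<Node> pool(allocs, 32); // 'allocs' - hypothetical callbacks; first block holds 32 items
    Node* n = pool.Alloc(7u); // forwards to Node(uint32_t)
    pool.Free(n); // the slot becomes the head of its block's free list
*/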
4906#endif // _VMA_POOL_ALLOCATOR_FUNCTIONS
4907#endif // _VMA_POOL_ALLOCATOR
4908
4909#ifndef _VMA_RAW_LIST
4910template<typename T>
4911struct VmaListItem
4912{
4913 VmaListItem* pPrev;
4914 VmaListItem* pNext;
4915 T Value;
4916};
4917
4918// Doubly linked list.
4919template<typename T>
4920class VmaRawList
4921{
4922 VMA_CLASS_NO_COPY_NO_MOVE(VmaRawList)
4923public:
4924 typedef VmaListItem<T> ItemType;
4925
4926 VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
4927 // Intentionally not calling Clear, because that would be unnecessary
4928 // computations to return all items to m_ItemAllocator as free.
4929 ~VmaRawList() = default;
4930
4931 size_t GetCount() const { return m_Count; }
4932 bool IsEmpty() const { return m_Count == 0; }
4933
4934 ItemType* Front() { return m_pFront; }
4935 ItemType* Back() { return m_pBack; }
4936 const ItemType* Front() const { return m_pFront; }
4937 const ItemType* Back() const { return m_pBack; }
4938
4939 ItemType* PushFront();
4940 ItemType* PushBack();
4941 ItemType* PushFront(const T& value);
4942 ItemType* PushBack(const T& value);
4943 void PopFront();
4944 void PopBack();
4945
    // pItem can be null - it means PushBack.
    ItemType* InsertBefore(ItemType* pItem);
    // pItem can be null - it means PushFront.
4949 ItemType* InsertAfter(ItemType* pItem);
4950 ItemType* InsertBefore(ItemType* pItem, const T& value);
4951 ItemType* InsertAfter(ItemType* pItem, const T& value);
4952
4953 void Clear();
4954 void Remove(ItemType* pItem);
4955
4956private:
4957 const VkAllocationCallbacks* const m_pAllocationCallbacks;
4958 VmaPoolAllocator<ItemType> m_ItemAllocator;
4959 ItemType* m_pFront;
4960 ItemType* m_pBack;
4961 size_t m_Count;
4962};
4963
4964#ifndef _VMA_RAW_LIST_FUNCTIONS
4965template<typename T>
4966VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks)
4967 : m_pAllocationCallbacks(pAllocationCallbacks),
4968 m_ItemAllocator(pAllocationCallbacks, 128),
4969 m_pFront(VMA_NULL),
4970 m_pBack(VMA_NULL),
4971 m_Count(0) {}
4972
4973template<typename T>
4974VmaListItem<T>* VmaRawList<T>::PushFront()
4975{
4976 ItemType* const pNewItem = m_ItemAllocator.Alloc();
4977 pNewItem->pPrev = VMA_NULL;
4978 if (IsEmpty())
4979 {
4980 pNewItem->pNext = VMA_NULL;
4981 m_pFront = pNewItem;
4982 m_pBack = pNewItem;
4983 m_Count = 1;
4984 }
4985 else
4986 {
4987 pNewItem->pNext = m_pFront;
4988 m_pFront->pPrev = pNewItem;
4989 m_pFront = pNewItem;
4990 ++m_Count;
4991 }
4992 return pNewItem;
4993}
4994
4995template<typename T>
4996VmaListItem<T>* VmaRawList<T>::PushBack()
4997{
4998 ItemType* const pNewItem = m_ItemAllocator.Alloc();
4999 pNewItem->pNext = VMA_NULL;
5000 if(IsEmpty())
5001 {
5002 pNewItem->pPrev = VMA_NULL;
5003 m_pFront = pNewItem;
5004 m_pBack = pNewItem;
5005 m_Count = 1;
5006 }
5007 else
5008 {
5009 pNewItem->pPrev = m_pBack;
5010 m_pBack->pNext = pNewItem;
5011 m_pBack = pNewItem;
5012 ++m_Count;
5013 }
5014 return pNewItem;
5015}
5016
5017template<typename T>
5018VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
5019{
5020 ItemType* const pNewItem = PushFront();
5021 pNewItem->Value = value;
5022 return pNewItem;
5023}
5024
5025template<typename T>
5026VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
5027{
5028 ItemType* const pNewItem = PushBack();
5029 pNewItem->Value = value;
5030 return pNewItem;
5031}
5032
5033template<typename T>
5034void VmaRawList<T>::PopFront()
5035{
5036 VMA_HEAVY_ASSERT(m_Count > 0);
5037 ItemType* const pFrontItem = m_pFront;
5038 ItemType* const pNextItem = pFrontItem->pNext;
5039 if (pNextItem != VMA_NULL)
5040 {
5041 pNextItem->pPrev = VMA_NULL;
5042 }
5043 m_pFront = pNextItem;
5044 m_ItemAllocator.Free(pFrontItem);
5045 --m_Count;
5046}
5047
5048template<typename T>
5049void VmaRawList<T>::PopBack()
5050{
5051 VMA_HEAVY_ASSERT(m_Count > 0);
5052 ItemType* const pBackItem = m_pBack;
5053 ItemType* const pPrevItem = pBackItem->pPrev;
5054 if(pPrevItem != VMA_NULL)
5055 {
5056 pPrevItem->pNext = VMA_NULL;
5057 }
5058 m_pBack = pPrevItem;
5059 m_ItemAllocator.Free(pBackItem);
5060 --m_Count;
5061}
5062
5063template<typename T>
5064void VmaRawList<T>::Clear()
5065{
5066 if (IsEmpty() == false)
5067 {
5068 ItemType* pItem = m_pBack;
5069 while (pItem != VMA_NULL)
5070 {
5071 ItemType* const pPrevItem = pItem->pPrev;
5072 m_ItemAllocator.Free(pItem);
5073 pItem = pPrevItem;
5074 }
5075 m_pFront = VMA_NULL;
5076 m_pBack = VMA_NULL;
5077 m_Count = 0;
5078 }
5079}
5080
5081template<typename T>
5082void VmaRawList<T>::Remove(ItemType* pItem)
5083{
5084 VMA_HEAVY_ASSERT(pItem != VMA_NULL);
5085 VMA_HEAVY_ASSERT(m_Count > 0);
5086
5087 if(pItem->pPrev != VMA_NULL)
5088 {
5089 pItem->pPrev->pNext = pItem->pNext;
5090 }
5091 else
5092 {
5093 VMA_HEAVY_ASSERT(m_pFront == pItem);
5094 m_pFront = pItem->pNext;
5095 }
5096
5097 if(pItem->pNext != VMA_NULL)
5098 {
5099 pItem->pNext->pPrev = pItem->pPrev;
5100 }
5101 else
5102 {
5103 VMA_HEAVY_ASSERT(m_pBack == pItem);
5104 m_pBack = pItem->pPrev;
5105 }
5106
5107 m_ItemAllocator.Free(pItem);
5108 --m_Count;
5109}
5110
5111template<typename T>
5112VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
5113{
5114 if(pItem != VMA_NULL)
5115 {
5116 ItemType* const prevItem = pItem->pPrev;
5117 ItemType* const newItem = m_ItemAllocator.Alloc();
5118 newItem->pPrev = prevItem;
5119 newItem->pNext = pItem;
5120 pItem->pPrev = newItem;
5121 if(prevItem != VMA_NULL)
5122 {
5123 prevItem->pNext = newItem;
5124 }
5125 else
5126 {
5127 VMA_HEAVY_ASSERT(m_pFront == pItem);
5128 m_pFront = newItem;
5129 }
5130 ++m_Count;
5131 return newItem;
5132 }
5133 else
5134 return PushBack();
5135}
5136
5137template<typename T>
5138VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
5139{
5140 if(pItem != VMA_NULL)
5141 {
5142 ItemType* const nextItem = pItem->pNext;
5143 ItemType* const newItem = m_ItemAllocator.Alloc();
5144 newItem->pNext = nextItem;
5145 newItem->pPrev = pItem;
5146 pItem->pNext = newItem;
5147 if(nextItem != VMA_NULL)
5148 {
5149 nextItem->pPrev = newItem;
5150 }
5151 else
5152 {
5153 VMA_HEAVY_ASSERT(m_pBack == pItem);
5154 m_pBack = newItem;
5155 }
5156 ++m_Count;
5157 return newItem;
5158 }
5159 else
5160 return PushFront();
5161}
5162
5163template<typename T>
5164VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
5165{
5166 ItemType* const newItem = InsertBefore(pItem);
5167 newItem->Value = value;
5168 return newItem;
5169}
5170
5171template<typename T>
5172VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
5173{
5174 ItemType* const newItem = InsertAfter(pItem);
5175 newItem->Value = value;
5176 return newItem;
5177}
5178#endif // _VMA_RAW_LIST_FUNCTIONS
5179#endif // _VMA_RAW_LIST
5180
5181#ifndef _VMA_LIST
5182template<typename T, typename AllocatorT>
5183class VmaList
5184{
5185 VMA_CLASS_NO_COPY_NO_MOVE(VmaList)
5186public:
5187 class reverse_iterator;
5188 class const_iterator;
5189 class const_reverse_iterator;
5190
5191 class iterator
5192 {
5193 friend class const_iterator;
5194 friend class VmaList<T, AllocatorT>;
5195 public:
5196 iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
5197 iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
5198
5199 T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
5200 T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
5201
5202 bool operator==(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
5203 bool operator!=(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
5204
5205 iterator operator++(int) { iterator result = *this; ++*this; return result; }
5206 iterator operator--(int) { iterator result = *this; --*this; return result; }
5207
5208 iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
5209 iterator& operator--();
5210
5211 private:
5212 VmaRawList<T>* m_pList;
5213 VmaListItem<T>* m_pItem;
5214
5215 iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
5216 };
5217 class reverse_iterator
5218 {
5219 friend class const_reverse_iterator;
5220 friend class VmaList<T, AllocatorT>;
5221 public:
5222 reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
5223 reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
5224
5225 T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
5226 T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
5227
5228 bool operator==(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
5229 bool operator!=(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
5230
5231 reverse_iterator operator++(int) { reverse_iterator result = *this; ++* this; return result; }
5232 reverse_iterator operator--(int) { reverse_iterator result = *this; --* this; return result; }
5233
5234 reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
5235 reverse_iterator& operator--();
5236
5237 private:
5238 VmaRawList<T>* m_pList;
5239 VmaListItem<T>* m_pItem;
5240
5241 reverse_iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
5242 };
5243 class const_iterator
5244 {
5245 friend class VmaList<T, AllocatorT>;
5246 public:
5247 const_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
5248 const_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
5249 const_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
5250
5251 iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }
5252
5253 const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
5254 const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
5255
5256 bool operator==(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
5257 bool operator!=(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
5258
5259 const_iterator operator++(int) { const_iterator result = *this; ++* this; return result; }
5260 const_iterator operator--(int) { const_iterator result = *this; --* this; return result; }
5261
5262 const_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
5263 const_iterator& operator--();
5264
5265 private:
5266 const VmaRawList<T>* m_pList;
5267 const VmaListItem<T>* m_pItem;
5268
5269 const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
5270 };
5271 class const_reverse_iterator
5272 {
5273 friend class VmaList<T, AllocatorT>;
5274 public:
5275 const_reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
5276 const_reverse_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
5277 const_reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
5278
5279 reverse_iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }
5280
5281 const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
5282 const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
5283
5284 bool operator==(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
5285 bool operator!=(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
5286
5287 const_reverse_iterator operator++(int) { const_reverse_iterator result = *this; ++* this; return result; }
5288 const_reverse_iterator operator--(int) { const_reverse_iterator result = *this; --* this; return result; }
5289
5290 const_reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
5291 const_reverse_iterator& operator--();
5292
5293 private:
5294 const VmaRawList<T>* m_pList;
5295 const VmaListItem<T>* m_pItem;
5296
5297 const_reverse_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
5298 };
5299
5300 VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) {}
5301
5302 bool empty() const { return m_RawList.IsEmpty(); }
5303 size_t size() const { return m_RawList.GetCount(); }
5304
5305 iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
5306 iterator end() { return iterator(&m_RawList, VMA_NULL); }
5307
5308 const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
5309 const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
5310
5311 const_iterator begin() const { return cbegin(); }
5312 const_iterator end() const { return cend(); }
5313
5314 reverse_iterator rbegin() { return reverse_iterator(&m_RawList, m_RawList.Back()); }
5315 reverse_iterator rend() { return reverse_iterator(&m_RawList, VMA_NULL); }
5316
5317 const_reverse_iterator crbegin() const { return const_reverse_iterator(&m_RawList, m_RawList.Back()); }
5318 const_reverse_iterator crend() const { return const_reverse_iterator(&m_RawList, VMA_NULL); }
5319
5320 const_reverse_iterator rbegin() const { return crbegin(); }
5321 const_reverse_iterator rend() const { return crend(); }
5322
5323 void push_back(const T& value) { m_RawList.PushBack(value); }
5324 iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
5325
5326 void clear() { m_RawList.Clear(); }
5327 void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
5328
5329private:
5330 VmaRawList<T> m_RawList;
5331};
5332
5333#ifndef _VMA_LIST_FUNCTIONS
5334template<typename T, typename AllocatorT>
5335typename VmaList<T, AllocatorT>::iterator& VmaList<T, AllocatorT>::iterator::operator--()
5336{
5337 if (m_pItem != VMA_NULL)
5338 {
5339 m_pItem = m_pItem->pPrev;
5340 }
5341 else
5342 {
5343 VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
5344 m_pItem = m_pList->Back();
5345 }
5346 return *this;
5347}
5348
5349template<typename T, typename AllocatorT>
5350typename VmaList<T, AllocatorT>::reverse_iterator& VmaList<T, AllocatorT>::reverse_iterator::operator--()
5351{
5352 if (m_pItem != VMA_NULL)
5353 {
5354 m_pItem = m_pItem->pNext;
5355 }
5356 else
5357 {
5358 VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
5359 m_pItem = m_pList->Front();
5360 }
5361 return *this;
5362}
5363
5364template<typename T, typename AllocatorT>
5365typename VmaList<T, AllocatorT>::const_iterator& VmaList<T, AllocatorT>::const_iterator::operator--()
5366{
5367 if (m_pItem != VMA_NULL)
5368 {
5369 m_pItem = m_pItem->pPrev;
5370 }
5371 else
5372 {
5373 VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
5374 m_pItem = m_pList->Back();
5375 }
5376 return *this;
5377}
5378
5379template<typename T, typename AllocatorT>
5380typename VmaList<T, AllocatorT>::const_reverse_iterator& VmaList<T, AllocatorT>::const_reverse_iterator::operator--()
5381{
5382 if (m_pItem != VMA_NULL)
5383 {
5384 m_pItem = m_pItem->pNext;
5385 }
5386 else
5387 {
5388 VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
        m_pItem = m_pList->Front();
5390 }
5391 return *this;
5392}
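
/*
Example (illustrative sketch): VmaList adds STL-style iterators on top of
VmaRawList, so the usual erase-while-iterating idiom applies. 'allocs' is a
hypothetical const VkAllocationCallbacks*.

    typedef VmaList<uint32_t, VmaStlAllocator<uint32_t>> U32List;
    U32List list((VmaStlAllocator<uint32_t>(allocs)));
    list.push_back(1);
    list.push_back(2);
    list.push_back(3);
    for(U32List::iterator it = list.begin(); it != list.end(); )
    {
        if(*it == 2)
            list.erase(it++); // advance first, then erase the old position
        else
            ++it;
    }
    // list == { 1, 3 }
*/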
5393#endif // _VMA_LIST_FUNCTIONS
5394#endif // _VMA_LIST
5395
5396#ifndef _VMA_INTRUSIVE_LINKED_LIST
5397/*
5398Expected interface of ItemTypeTraits:
5399struct MyItemTypeTraits
5400{
5401 typedef MyItem ItemType;
5402 static ItemType* GetPrev(const ItemType* item) { return item->myPrevPtr; }
5403 static ItemType* GetNext(const ItemType* item) { return item->myNextPtr; }
5404 static ItemType*& AccessPrev(ItemType* item) { return item->myPrevPtr; }
5405 static ItemType*& AccessNext(ItemType* item) { return item->myNextPtr; }
5406};
5407*/
5408template<typename ItemTypeTraits>
5409class VmaIntrusiveLinkedList
5410{
5411public:
5412 typedef typename ItemTypeTraits::ItemType ItemType;
5413 static ItemType* GetPrev(const ItemType* item) { return ItemTypeTraits::GetPrev(item); }
5414 static ItemType* GetNext(const ItemType* item) { return ItemTypeTraits::GetNext(item); }
5415
5416 // Movable, not copyable.
5417 VmaIntrusiveLinkedList() = default;
5418 VmaIntrusiveLinkedList(VmaIntrusiveLinkedList && src);
5419 VmaIntrusiveLinkedList(const VmaIntrusiveLinkedList&) = delete;
5420 VmaIntrusiveLinkedList& operator=(VmaIntrusiveLinkedList&& src);
5421 VmaIntrusiveLinkedList& operator=(const VmaIntrusiveLinkedList&) = delete;
5422 ~VmaIntrusiveLinkedList() { VMA_HEAVY_ASSERT(IsEmpty()); }
5423
5424 size_t GetCount() const { return m_Count; }
5425 bool IsEmpty() const { return m_Count == 0; }
5426 ItemType* Front() { return m_Front; }
5427 ItemType* Back() { return m_Back; }
5428 const ItemType* Front() const { return m_Front; }
5429 const ItemType* Back() const { return m_Back; }
5430
5431 void PushBack(ItemType* item);
5432 void PushFront(ItemType* item);
5433 ItemType* PopBack();
5434 ItemType* PopFront();
5435
    // existingItem can be null - it means PushBack.
    void InsertBefore(ItemType* existingItem, ItemType* newItem);
    // existingItem can be null - it means PushFront.
5439 void InsertAfter(ItemType* existingItem, ItemType* newItem);
5440 void Remove(ItemType* item);
5441 void RemoveAll();
5442
5443private:
5444 ItemType* m_Front = VMA_NULL;
5445 ItemType* m_Back = VMA_NULL;
5446 size_t m_Count = 0;
5447};
5448
5449#ifndef _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
5450template<typename ItemTypeTraits>
5451VmaIntrusiveLinkedList<ItemTypeTraits>::VmaIntrusiveLinkedList(VmaIntrusiveLinkedList&& src)
5452 : m_Front(src.m_Front), m_Back(src.m_Back), m_Count(src.m_Count)
5453{
5454 src.m_Front = src.m_Back = VMA_NULL;
5455 src.m_Count = 0;
5456}
5457
5458template<typename ItemTypeTraits>
5459VmaIntrusiveLinkedList<ItemTypeTraits>& VmaIntrusiveLinkedList<ItemTypeTraits>::operator=(VmaIntrusiveLinkedList&& src)
5460{
5461 if (&src != this)
5462 {
5463 VMA_HEAVY_ASSERT(IsEmpty());
5464 m_Front = src.m_Front;
5465 m_Back = src.m_Back;
5466 m_Count = src.m_Count;
5467 src.m_Front = src.m_Back = VMA_NULL;
5468 src.m_Count = 0;
5469 }
5470 return *this;
5471}
5472
5473template<typename ItemTypeTraits>
5474void VmaIntrusiveLinkedList<ItemTypeTraits>::PushBack(ItemType* item)
5475{
5476 VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
5477 if (IsEmpty())
5478 {
5479 m_Front = item;
5480 m_Back = item;
5481 m_Count = 1;
5482 }
5483 else
5484 {
5485 ItemTypeTraits::AccessPrev(item) = m_Back;
5486 ItemTypeTraits::AccessNext(m_Back) = item;
5487 m_Back = item;
5488 ++m_Count;
5489 }
5490}
5491
5492template<typename ItemTypeTraits>
5493void VmaIntrusiveLinkedList<ItemTypeTraits>::PushFront(ItemType* item)
5494{
5495 VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
5496 if (IsEmpty())
5497 {
5498 m_Front = item;
5499 m_Back = item;
5500 m_Count = 1;
5501 }
5502 else
5503 {
5504 ItemTypeTraits::AccessNext(item) = m_Front;
5505 ItemTypeTraits::AccessPrev(m_Front) = item;
5506 m_Front = item;
5507 ++m_Count;
5508 }
5509}
5510
5511template<typename ItemTypeTraits>
5512typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopBack()
5513{
5514 VMA_HEAVY_ASSERT(m_Count > 0);
5515 ItemType* const backItem = m_Back;
5516 ItemType* const prevItem = ItemTypeTraits::GetPrev(backItem);
5517 if (prevItem != VMA_NULL)
5518 {
5519 ItemTypeTraits::AccessNext(prevItem) = VMA_NULL;
5520 }
5521 m_Back = prevItem;
5522 --m_Count;
5523 ItemTypeTraits::AccessPrev(backItem) = VMA_NULL;
5524 ItemTypeTraits::AccessNext(backItem) = VMA_NULL;
5525 return backItem;
5526}
5527
5528template<typename ItemTypeTraits>
5529typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopFront()
5530{
5531 VMA_HEAVY_ASSERT(m_Count > 0);
5532 ItemType* const frontItem = m_Front;
5533 ItemType* const nextItem = ItemTypeTraits::GetNext(frontItem);
5534 if (nextItem != VMA_NULL)
5535 {
5536 ItemTypeTraits::AccessPrev(nextItem) = VMA_NULL;
5537 }
5538 m_Front = nextItem;
5539 --m_Count;
5540 ItemTypeTraits::AccessPrev(frontItem) = VMA_NULL;
5541 ItemTypeTraits::AccessNext(frontItem) = VMA_NULL;
5542 return frontItem;
5543}
5544
5545template<typename ItemTypeTraits>
5546void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertBefore(ItemType* existingItem, ItemType* newItem)
5547{
5548 VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
5549 if (existingItem != VMA_NULL)
5550 {
5551 ItemType* const prevItem = ItemTypeTraits::GetPrev(existingItem);
5552 ItemTypeTraits::AccessPrev(newItem) = prevItem;
5553 ItemTypeTraits::AccessNext(newItem) = existingItem;
5554 ItemTypeTraits::AccessPrev(existingItem) = newItem;
5555 if (prevItem != VMA_NULL)
5556 {
5557 ItemTypeTraits::AccessNext(prevItem) = newItem;
5558 }
5559 else
5560 {
5561 VMA_HEAVY_ASSERT(m_Front == existingItem);
5562 m_Front = newItem;
5563 }
5564 ++m_Count;
5565 }
5566 else
        PushBack(newItem);
5568}
5569
5570template<typename ItemTypeTraits>
5571void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertAfter(ItemType* existingItem, ItemType* newItem)
5572{
5573 VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
5574 if (existingItem != VMA_NULL)
5575 {
5576 ItemType* const nextItem = ItemTypeTraits::GetNext(existingItem);
5577 ItemTypeTraits::AccessNext(newItem) = nextItem;
5578 ItemTypeTraits::AccessPrev(newItem) = existingItem;
5579 ItemTypeTraits::AccessNext(existingItem) = newItem;
5580 if (nextItem != VMA_NULL)
5581 {
5582 ItemTypeTraits::AccessPrev(nextItem) = newItem;
5583 }
5584 else
5585 {
5586 VMA_HEAVY_ASSERT(m_Back == existingItem);
5587 m_Back = newItem;
5588 }
5589 ++m_Count;
5590 }
5591 else
        return PushFront(newItem);
5593}
5594
5595template<typename ItemTypeTraits>
5596void VmaIntrusiveLinkedList<ItemTypeTraits>::Remove(ItemType* item)
5597{
5598 VMA_HEAVY_ASSERT(item != VMA_NULL && m_Count > 0);
5599 if (ItemTypeTraits::GetPrev(item) != VMA_NULL)
5600 {
5601 ItemTypeTraits::AccessNext(ItemTypeTraits::AccessPrev(item)) = ItemTypeTraits::GetNext(item);
5602 }
5603 else
5604 {
5605 VMA_HEAVY_ASSERT(m_Front == item);
5606 m_Front = ItemTypeTraits::GetNext(item);
5607 }
5608
5609 if (ItemTypeTraits::GetNext(item) != VMA_NULL)
5610 {
5611 ItemTypeTraits::AccessPrev(ItemTypeTraits::AccessNext(item)) = ItemTypeTraits::GetPrev(item);
5612 }
5613 else
5614 {
5615 VMA_HEAVY_ASSERT(m_Back == item);
5616 m_Back = ItemTypeTraits::GetPrev(item);
5617 }
5618 ItemTypeTraits::AccessPrev(item) = VMA_NULL;
5619 ItemTypeTraits::AccessNext(item) = VMA_NULL;
5620 --m_Count;
5621}
5622
5623template<typename ItemTypeTraits>
5624void VmaIntrusiveLinkedList<ItemTypeTraits>::RemoveAll()
5625{
5626 if (!IsEmpty())
5627 {
5628 ItemType* item = m_Back;
5629 while (item != VMA_NULL)
5630 {
5631 ItemType* const prevItem = ItemTypeTraits::AccessPrev(item);
5632 ItemTypeTraits::AccessPrev(item) = VMA_NULL;
5633 ItemTypeTraits::AccessNext(item) = VMA_NULL;
5634 item = prevItem;
5635 }
5636 m_Front = VMA_NULL;
5637 m_Back = VMA_NULL;
5638 m_Count = 0;
5639 }
5640}
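
/*
Example (illustrative sketch): an item type that carries its own links, plus a
traits struct matching the interface documented above.

    struct Block
    {
        Block* pPrev = VMA_NULL;
        Block* pNext = VMA_NULL;
        VkDeviceSize size = 0;
    };
    struct BlockTraits
    {
        typedef Block ItemType;
        static ItemType* GetPrev(const ItemType* item) { return item->pPrev; }
        static ItemType* GetNext(const ItemType* item) { return item->pNext; }
        static ItemType*& AccessPrev(ItemType* item) { return item->pPrev; }
        static ItemType*& AccessNext(ItemType* item) { return item->pNext; }
    };

    Block a, b;
    VmaIntrusiveLinkedList<BlockTraits> blockList;
    blockList.PushBack(&a);
    blockList.PushBack(&b);
    blockList.Remove(&a);
    blockList.RemoveAll(); // must be empty before destruction - the destructor asserts IsEmpty()
*/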
5641#endif // _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
5642#endif // _VMA_INTRUSIVE_LINKED_LIST
5643
5644#if !defined(_VMA_STRING_BUILDER) && VMA_STATS_STRING_ENABLED
5645class VmaStringBuilder
5646{
5647public:
5648 VmaStringBuilder(const VkAllocationCallbacks* allocationCallbacks) : m_Data(VmaStlAllocator<char>(allocationCallbacks)) {}
5649 ~VmaStringBuilder() = default;
5650
5651 size_t GetLength() const { return m_Data.size(); }
5652 const char* GetData() const { return m_Data.data(); }
    void AddNewLine() { Add('\n'); }
    void Add(char ch) { m_Data.push_back(ch); }
5655
5656 void Add(const char* pStr);
5657 void AddNumber(uint32_t num);
5658 void AddNumber(uint64_t num);
5659 void AddPointer(const void* ptr);
5660
5661private:
5662 VmaVector<char, VmaStlAllocator<char>> m_Data;
5663};
5664
5665#ifndef _VMA_STRING_BUILDER_FUNCTIONS
5666void VmaStringBuilder::Add(const char* pStr)
5667{
    const size_t strLen = strlen(pStr);
    if (strLen > 0)
    {
        const size_t oldCount = m_Data.size();
        m_Data.resize(oldCount + strLen);
        memcpy(m_Data.data() + oldCount, pStr, strLen);
5674 }
5675}
5676
5677void VmaStringBuilder::AddNumber(uint32_t num)
5678{
5679 char buf[11];
5680 buf[10] = '\0';
5681 char* p = &buf[10];
5682 do
5683 {
5684 *--p = '0' + (char)(num % 10);
5685 num /= 10;
5686 } while (num);
    Add(p);
5688}
5689
5690void VmaStringBuilder::AddNumber(uint64_t num)
5691{
5692 char buf[21];
5693 buf[20] = '\0';
5694 char* p = &buf[20];
5695 do
5696 {
5697 *--p = '0' + (char)(num % 10);
5698 num /= 10;
5699 } while (num);
    Add(p);
5701}
5702
5703void VmaStringBuilder::AddPointer(const void* ptr)
5704{
5705 char buf[21];
    VmaPtrToStr(buf, sizeof(buf), ptr);
    Add(buf);
5708}
5709#endif //_VMA_STRING_BUILDER_FUNCTIONS
5710#endif // _VMA_STRING_BUILDER
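
/*
A minimal usage sketch for VmaStringBuilder above (the allocationCallbacks
variable is illustrative only; it may also be null):

    VmaStringBuilder sb(allocationCallbacks);
    sb.Add("Offset: ");
    sb.AddNumber((uint64_t)4096); // Appends the decimal digits "4096".
    sb.AddNewLine();
    // Note: GetData() is not null-terminated - always pair it with GetLength().

AddNumber() formats the digits into a small local buffer back to front
(least significant digit first), then appends the resulting string.
*/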
5711
5712#if !defined(_VMA_JSON_WRITER) && VMA_STATS_STRING_ENABLED
5713/*
Helps to conveniently build a correct JSON document to be written to the
VmaStringBuilder passed to the constructor.
5716*/
5717class VmaJsonWriter
5718{
5719 VMA_CLASS_NO_COPY_NO_MOVE(VmaJsonWriter)
5720public:
5721 // sb - string builder to write the document to. Must remain alive for the whole lifetime of this object.
5722 VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
5723 ~VmaJsonWriter();
5724
5725 // Begins object by writing "{".
5726 // Inside an object, you must call pairs of WriteString and a value, e.g.:
5727 // j.BeginObject(true); j.WriteString("A"); j.WriteNumber(1); j.WriteString("B"); j.WriteNumber(2); j.EndObject();
5728 // Will write: { "A": 1, "B": 2 }
5729 void BeginObject(bool singleLine = false);
5730 // Ends object by writing "}".
5731 void EndObject();
5732
5733 // Begins array by writing "[".
5734 // Inside an array, you can write a sequence of any values.
5735 void BeginArray(bool singleLine = false);
    // Ends array by writing "]".
5737 void EndArray();
5738
5739 // Writes a string value inside "".
5740 // pStr can contain any ANSI characters, including '"', new line etc. - they will be properly escaped.
5741 void WriteString(const char* pStr);
5742
5743 // Begins writing a string value.
5744 // Call BeginString, ContinueString, ContinueString, ..., EndString instead of
5745 // WriteString to conveniently build the string content incrementally, made of
5746 // parts including numbers.
5747 void BeginString(const char* pStr = VMA_NULL);
5748 // Posts next part of an open string.
5749 void ContinueString(const char* pStr);
5750 // Posts next part of an open string. The number is converted to decimal characters.
5751 void ContinueString(uint32_t n);
5752 void ContinueString(uint64_t n);
5753 // Posts next part of an open string. Pointer value is converted to characters
5754 // using "%p" formatting - shown as hexadecimal number, e.g.: 000000081276Ad00
5755 void ContinueString_Pointer(const void* ptr);
5756 // Ends writing a string value by writing '"'.
5757 void EndString(const char* pStr = VMA_NULL);
5758
5759 // Writes a number value.
5760 void WriteNumber(uint32_t n);
5761 void WriteNumber(uint64_t n);
5762 // Writes a boolean value - false or true.
5763 void WriteBool(bool b);
5764 // Writes a null value.
5765 void WriteNull();
5766
5767private:
5768 enum COLLECTION_TYPE
5769 {
5770 COLLECTION_TYPE_OBJECT,
5771 COLLECTION_TYPE_ARRAY,
5772 };
5773 struct StackItem
5774 {
5775 COLLECTION_TYPE type;
5776 uint32_t valueCount;
5777 bool singleLineMode;
5778 };
5779
5780 static const char* const INDENT;
5781
5782 VmaStringBuilder& m_SB;
5783 VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
5784 bool m_InsideString;
5785
5786 void BeginValue(bool isString);
5787 void WriteIndent(bool oneLess = false);
5788};
5789const char* const VmaJsonWriter::INDENT = " ";
5790
5791#ifndef _VMA_JSON_WRITER_FUNCTIONS
5792VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb)
5793 : m_SB(sb),
5794 m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
5795 m_InsideString(false) {}
5796
5797VmaJsonWriter::~VmaJsonWriter()
5798{
5799 VMA_ASSERT(!m_InsideString);
5800 VMA_ASSERT(m_Stack.empty());
5801}
5802
5803void VmaJsonWriter::BeginObject(bool singleLine)
5804{
5805 VMA_ASSERT(!m_InsideString);
5806
    BeginValue(false);
    m_SB.Add('{');
5809
5810 StackItem item;
5811 item.type = COLLECTION_TYPE_OBJECT;
5812 item.valueCount = 0;
5813 item.singleLineMode = singleLine;
    m_Stack.push_back(item);
5815}
5816
5817void VmaJsonWriter::EndObject()
5818{
5819 VMA_ASSERT(!m_InsideString);
5820
    WriteIndent(true);
    m_SB.Add('}');
5823
5824 VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
5825 m_Stack.pop_back();
5826}
5827
5828void VmaJsonWriter::BeginArray(bool singleLine)
5829{
5830 VMA_ASSERT(!m_InsideString);
5831
    BeginValue(false);
    m_SB.Add('[');
5834
5835 StackItem item;
5836 item.type = COLLECTION_TYPE_ARRAY;
5837 item.valueCount = 0;
5838 item.singleLineMode = singleLine;
    m_Stack.push_back(item);
5840}
5841
5842void VmaJsonWriter::EndArray()
5843{
5844 VMA_ASSERT(!m_InsideString);
5845
    WriteIndent(true);
    m_SB.Add(']');
5848
5849 VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
5850 m_Stack.pop_back();
5851}
5852
5853void VmaJsonWriter::WriteString(const char* pStr)
5854{
5855 BeginString(pStr);
5856 EndString();
5857}
5858
5859void VmaJsonWriter::BeginString(const char* pStr)
5860{
5861 VMA_ASSERT(!m_InsideString);
5862
    BeginValue(true);
    m_SB.Add('"');
5865 m_InsideString = true;
5866 if (pStr != VMA_NULL && pStr[0] != '\0')
5867 {
5868 ContinueString(pStr);
5869 }
5870}
5871
5872void VmaJsonWriter::ContinueString(const char* pStr)
5873{
5874 VMA_ASSERT(m_InsideString);
5875
    const size_t strLen = strlen(pStr);
5877 for (size_t i = 0; i < strLen; ++i)
5878 {
5879 char ch = pStr[i];
5880 if (ch == '\\')
5881 {
            m_SB.Add("\\\\");
        }
        else if (ch == '"')
        {
            m_SB.Add("\\\"");
        }
        else if ((uint8_t)ch >= 32)
        {
            m_SB.Add(ch);
        }
        else switch (ch)
        {
        case '\b':
            m_SB.Add("\\b");
            break;
        case '\f':
            m_SB.Add("\\f");
            break;
        case '\n':
            m_SB.Add("\\n");
            break;
        case '\r':
            m_SB.Add("\\r");
            break;
        case '\t':
            m_SB.Add("\\t");
5908 break;
5909 default:
5910 VMA_ASSERT(0 && "Character not currently supported.");
5911 }
5912 }
5913}
5914
5915void VmaJsonWriter::ContinueString(uint32_t n)
5916{
5917 VMA_ASSERT(m_InsideString);
    m_SB.AddNumber(n);
5919}
5920
5921void VmaJsonWriter::ContinueString(uint64_t n)
5922{
5923 VMA_ASSERT(m_InsideString);
    m_SB.AddNumber(n);
5925}
5926
5927void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
5928{
5929 VMA_ASSERT(m_InsideString);
5930 m_SB.AddPointer(ptr);
5931}
5932
5933void VmaJsonWriter::EndString(const char* pStr)
5934{
5935 VMA_ASSERT(m_InsideString);
5936 if (pStr != VMA_NULL && pStr[0] != '\0')
5937 {
5938 ContinueString(pStr);
5939 }
    m_SB.Add('"');
5941 m_InsideString = false;
5942}
5943
5944void VmaJsonWriter::WriteNumber(uint32_t n)
5945{
5946 VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.AddNumber(n);
5949}
5950
5951void VmaJsonWriter::WriteNumber(uint64_t n)
5952{
5953 VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.AddNumber(n);
5956}
5957
5958void VmaJsonWriter::WriteBool(bool b)
5959{
5960 VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.Add(b ? "true" : "false");
5963}
5964
5965void VmaJsonWriter::WriteNull()
5966{
5967 VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.Add("null");
5970}
5971
5972void VmaJsonWriter::BeginValue(bool isString)
5973{
5974 if (!m_Stack.empty())
5975 {
5976 StackItem& currItem = m_Stack.back();
5977 if (currItem.type == COLLECTION_TYPE_OBJECT &&
5978 currItem.valueCount % 2 == 0)
5979 {
5980 VMA_ASSERT(isString);
5981 }
5982
5983 if (currItem.type == COLLECTION_TYPE_OBJECT &&
5984 currItem.valueCount % 2 != 0)
5985 {
            m_SB.Add(": ");
        }
        else if (currItem.valueCount > 0)
        {
            m_SB.Add(", ");
5991 WriteIndent();
5992 }
5993 else
5994 {
5995 WriteIndent();
5996 }
5997 ++currItem.valueCount;
5998 }
5999}
6000
6001void VmaJsonWriter::WriteIndent(bool oneLess)
6002{
6003 if (!m_Stack.empty() && !m_Stack.back().singleLineMode)
6004 {
6005 m_SB.AddNewLine();
6006
6007 size_t count = m_Stack.size();
6008 if (count > 0 && oneLess)
6009 {
6010 --count;
6011 }
6012 for (size_t i = 0; i < count; ++i)
6013 {
            m_SB.Add(INDENT);
6015 }
6016 }
6017}
6018#endif // _VMA_JSON_WRITER_FUNCTIONS
6019
6020static void VmaPrintDetailedStatistics(VmaJsonWriter& json, const VmaDetailedStatistics& stat)
6021{
6022 json.BeginObject();
6023
    json.WriteString("BlockCount");
    json.WriteNumber(stat.statistics.blockCount);
    json.WriteString("BlockBytes");
    json.WriteNumber(stat.statistics.blockBytes);
    json.WriteString("AllocationCount");
    json.WriteNumber(stat.statistics.allocationCount);
    json.WriteString("AllocationBytes");
    json.WriteNumber(stat.statistics.allocationBytes);
    json.WriteString("UnusedRangeCount");
    json.WriteNumber(stat.unusedRangeCount);
6034
6035 if (stat.statistics.allocationCount > 1)
6036 {
        json.WriteString("AllocationSizeMin");
        json.WriteNumber(stat.allocationSizeMin);
        json.WriteString("AllocationSizeMax");
        json.WriteNumber(stat.allocationSizeMax);
    }
    if (stat.unusedRangeCount > 1)
    {
        json.WriteString("UnusedRangeSizeMin");
        json.WriteNumber(stat.unusedRangeSizeMin);
        json.WriteString("UnusedRangeSizeMax");
        json.WriteNumber(stat.unusedRangeSizeMax);
6048 }
6049 json.EndObject();
6050}
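
/*
For illustration, VmaPrintDetailedStatistics above emits an object of the
following shape (the numbers are made up):

    { "BlockCount": 2, "BlockBytes": 33554432, "AllocationCount": 10,
      "AllocationBytes": 1048576, "UnusedRangeCount": 3,
      "AllocationSizeMin": 256, "AllocationSizeMax": 65536,
      "UnusedRangeSizeMin": 128, "UnusedRangeSizeMax": 4096 }

The Min/Max pairs are written only when there is more than one allocation or
unused range, respectively.
*/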
6051#endif // _VMA_JSON_WRITER
6052
6053#ifndef _VMA_MAPPING_HYSTERESIS
6054
6055class VmaMappingHysteresis
6056{
6057 VMA_CLASS_NO_COPY_NO_MOVE(VmaMappingHysteresis)
6058public:
6059 VmaMappingHysteresis() = default;
6060
6061 uint32_t GetExtraMapping() const { return m_ExtraMapping; }
6062
6063 // Call when Map was called.
6064 // Returns true if switched to extra +1 mapping reference count.
6065 bool PostMap()
6066 {
6067#if VMA_MAPPING_HYSTERESIS_ENABLED
6068 if(m_ExtraMapping == 0)
6069 {
6070 ++m_MajorCounter;
6071 if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING)
6072 {
6073 m_ExtraMapping = 1;
6074 m_MajorCounter = 0;
6075 m_MinorCounter = 0;
6076 return true;
6077 }
6078 }
6079 else // m_ExtraMapping == 1
6080 PostMinorCounter();
6081#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
6082 return false;
6083 }
6084
6085 // Call when Unmap was called.
6086 void PostUnmap()
6087 {
6088#if VMA_MAPPING_HYSTERESIS_ENABLED
6089 if(m_ExtraMapping == 0)
6090 ++m_MajorCounter;
6091 else // m_ExtraMapping == 1
6092 PostMinorCounter();
6093#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
6094 }
6095
6096 // Call when allocation was made from the memory block.
6097 void PostAlloc()
6098 {
6099#if VMA_MAPPING_HYSTERESIS_ENABLED
6100 if(m_ExtraMapping == 1)
6101 ++m_MajorCounter;
6102 else // m_ExtraMapping == 0
6103 PostMinorCounter();
6104#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
6105 }
6106
6107 // Call when allocation was freed from the memory block.
6108 // Returns true if switched to extra -1 mapping reference count.
6109 bool PostFree()
6110 {
6111#if VMA_MAPPING_HYSTERESIS_ENABLED
6112 if(m_ExtraMapping == 1)
6113 {
6114 ++m_MajorCounter;
6115 if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING &&
6116 m_MajorCounter > m_MinorCounter + 1)
6117 {
6118 m_ExtraMapping = 0;
6119 m_MajorCounter = 0;
6120 m_MinorCounter = 0;
6121 return true;
6122 }
6123 }
6124 else // m_ExtraMapping == 0
6125 PostMinorCounter();
6126#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
6127 return false;
6128 }
6129
6130private:
6131 static const int32_t COUNTER_MIN_EXTRA_MAPPING = 7;
6132
6133 uint32_t m_MinorCounter = 0;
6134 uint32_t m_MajorCounter = 0;
6135 uint32_t m_ExtraMapping = 0; // 0 or 1.
6136
6137 void PostMinorCounter()
6138 {
6139 if(m_MinorCounter < m_MajorCounter)
6140 {
6141 ++m_MinorCounter;
6142 }
6143 else if(m_MajorCounter > 0)
6144 {
6145 --m_MajorCounter;
6146 --m_MinorCounter;
6147 }
6148 }
6149};
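
/*
Behavior sketch of the hysteresis above (assuming VMA_MAPPING_HYSTERESIS_ENABLED):
the block keeps an extra +1 mapping reference once Map() events clearly dominate,
so frequently re-mapped blocks avoid a vkMapMemory/vkUnmapMemory ping-pong:

    VmaMappingHysteresis h;
    for (uint32_t i = 0; i < 7; ++i)
        h.PostMap(); // The 7th call reaches COUNTER_MIN_EXTRA_MAPPING and returns true.
    // GetExtraMapping() == 1 now; it drops back to 0 only after PostFree()
    // calls sufficiently outnumber PostAlloc() calls and PostFree() returns true.
*/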
6150
6151#endif // _VMA_MAPPING_HYSTERESIS
6152
6153#if VMA_EXTERNAL_MEMORY_WIN32
6154class VmaWin32Handle
6155{
6156public:
6157 VmaWin32Handle() noexcept : m_hHandle(VMA_NULL) { }
6158 explicit VmaWin32Handle(HANDLE hHandle) noexcept : m_hHandle(hHandle) { }
6159 ~VmaWin32Handle() noexcept { if (m_hHandle != VMA_NULL) { ::CloseHandle(m_hHandle); } }
6160 VMA_CLASS_NO_COPY_NO_MOVE(VmaWin32Handle)
6161
6162public:
6163 // Strengthened
6164 VkResult GetHandle(VkDevice device, VkDeviceMemory memory, PFN_vkGetMemoryWin32HandleKHR pvkGetMemoryWin32HandleKHR, HANDLE hTargetProcess, bool useMutex, HANDLE* pHandle) noexcept
6165 {
6166 *pHandle = VMA_NULL;
6167 // Try to get handle first.
6168 if (m_hHandle != VMA_NULL)
6169 {
6170 *pHandle = Duplicate(hTargetProcess);
6171 return VK_SUCCESS;
6172 }
6173
6174 VkResult res = VK_SUCCESS;
        // Not created yet - create it now, synchronized with other threads via the mutex (double-checked locking).
6176 {
6177 VmaMutexLockWrite lock(m_Mutex, useMutex);
6178 if (m_hHandle == VMA_NULL)
6179 {
6180 res = Create(device, memory, pvkGetMemoryWin32HandleKHR, &m_hHandle);
6181 }
6182 }
6183
6184 *pHandle = Duplicate(hTargetProcess);
6185 return res;
6186 }
6187
6188 operator bool() const noexcept { return m_hHandle != VMA_NULL; }
6189private:
6190 // Not atomic
6191 static VkResult Create(VkDevice device, VkDeviceMemory memory, PFN_vkGetMemoryWin32HandleKHR pvkGetMemoryWin32HandleKHR, HANDLE* pHandle) noexcept
6192 {
6193 VkResult res = VK_ERROR_FEATURE_NOT_PRESENT;
6194 if (pvkGetMemoryWin32HandleKHR != VMA_NULL)
6195 {
6196 VkMemoryGetWin32HandleInfoKHR handleInfo{ };
6197 handleInfo.sType = VK_STRUCTURE_TYPE_MEMORY_GET_WIN32_HANDLE_INFO_KHR;
6198 handleInfo.memory = memory;
6199 handleInfo.handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR;
6200 res = pvkGetMemoryWin32HandleKHR(device, &handleInfo, pHandle);
6201 }
6202 return res;
6203 }
6204 HANDLE Duplicate(HANDLE hTargetProcess = VMA_NULL) const noexcept
6205 {
6206 if (!m_hHandle)
6207 return m_hHandle;
6208
6209 HANDLE hCurrentProcess = ::GetCurrentProcess();
6210 HANDLE hDupHandle = VMA_NULL;
6211 if (!::DuplicateHandle(hCurrentProcess, m_hHandle, hTargetProcess ? hTargetProcess : hCurrentProcess, &hDupHandle, 0, FALSE, DUPLICATE_SAME_ACCESS))
6212 {
6213 VMA_ASSERT(0 && "Failed to duplicate handle.");
6214 }
6215 return hDupHandle;
6216 }
6217private:
6218 HANDLE m_hHandle;
    VMA_RW_MUTEX m_Mutex; // Protects access to m_hHandle.
6220};
6221#else
6222class VmaWin32Handle
6223{
6224 // ABI compatibility
6225 void* placeholder = VMA_NULL;
6226 VMA_RW_MUTEX placeholder2;
6227};
6228#endif // VMA_EXTERNAL_MEMORY_WIN32
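
/*
Note on GetHandle() above: it uses a double-checked pattern - m_hHandle is first
read without taking m_Mutex; only when the handle has not been created yet is
the write lock acquired and the check repeated under the lock, so that exactly
one thread calls vkGetMemoryWin32HandleKHR for a given block.
*/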
6229
6230
6231#ifndef _VMA_DEVICE_MEMORY_BLOCK
6232/*
6233Represents a single block of device memory (`VkDeviceMemory`) with all the
6234data about its regions (aka suballocations, #VmaAllocation), assigned and free.
6235
6236Thread-safety:
6237- Access to m_pMetadata must be externally synchronized.
6238- Map, Unmap, Bind* are synchronized internally.
6239*/
6240class VmaDeviceMemoryBlock
6241{
6242 VMA_CLASS_NO_COPY_NO_MOVE(VmaDeviceMemoryBlock)
6243public:
6244 VmaBlockMetadata* m_pMetadata;
6245
6246 VmaDeviceMemoryBlock(VmaAllocator hAllocator);
6247 ~VmaDeviceMemoryBlock();
6248
6249 // Always call after construction.
6250 void Init(
6251 VmaAllocator hAllocator,
6252 VmaPool hParentPool,
6253 uint32_t newMemoryTypeIndex,
6254 VkDeviceMemory newMemory,
6255 VkDeviceSize newSize,
6256 uint32_t id,
6257 uint32_t algorithm,
6258 VkDeviceSize bufferImageGranularity);
6259 // Always call before destruction.
6260 void Destroy(VmaAllocator allocator);
6261
6262 VmaPool GetParentPool() const { return m_hParentPool; }
6263 VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
6264 uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
6265 uint32_t GetId() const { return m_Id; }
6266 void* GetMappedData() const { return m_pMappedData; }
6267 uint32_t GetMapRefCount() const { return m_MapCount; }
6268
6269 // Call when allocation/free was made from m_pMetadata.
6270 // Used for m_MappingHysteresis.
6271 void PostAlloc(VmaAllocator hAllocator);
6272 void PostFree(VmaAllocator hAllocator);
6273
6274 // Validates all data structures inside this object. If not valid, returns false.
6275 bool Validate() const;
6276 VkResult CheckCorruption(VmaAllocator hAllocator);
6277
6278 // ppData can be null.
6279 VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
6280 void Unmap(VmaAllocator hAllocator, uint32_t count);
6281
6282 VkResult WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
6283 VkResult ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
6284
6285 VkResult BindBufferMemory(
6286 const VmaAllocator hAllocator,
6287 const VmaAllocation hAllocation,
6288 VkDeviceSize allocationLocalOffset,
6289 VkBuffer hBuffer,
6290 const void* pNext);
6291 VkResult BindImageMemory(
6292 const VmaAllocator hAllocator,
6293 const VmaAllocation hAllocation,
6294 VkDeviceSize allocationLocalOffset,
6295 VkImage hImage,
6296 const void* pNext);
6297#if VMA_EXTERNAL_MEMORY_WIN32
6298 VkResult CreateWin32Handle(
6299 const VmaAllocator hAllocator,
6300 PFN_vkGetMemoryWin32HandleKHR pvkGetMemoryWin32HandleKHR,
6301 HANDLE hTargetProcess,
        HANDLE* pHandle) noexcept;
6303#endif // VMA_EXTERNAL_MEMORY_WIN32
6304private:
    VmaPool m_hParentPool; // VK_NULL_HANDLE if it doesn't belong to a custom pool.
6306 uint32_t m_MemoryTypeIndex;
6307 uint32_t m_Id;
6308 VkDeviceMemory m_hMemory;
6309
6310 /*
6311 Protects access to m_hMemory so it is not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
6312 Also protects m_MapCount, m_pMappedData.
6313 Allocations, deallocations, any change in m_pMetadata is protected by parent's VmaBlockVector::m_Mutex.
6314 */
6315 VMA_MUTEX m_MapAndBindMutex;
6316 VmaMappingHysteresis m_MappingHysteresis;
6317 uint32_t m_MapCount;
6318 void* m_pMappedData;
6319
6320 VmaWin32Handle m_Handle;
6321};
6322#endif // _VMA_DEVICE_MEMORY_BLOCK
6323
6324#ifndef _VMA_ALLOCATION_T
6325struct VmaAllocationExtraData
6326{
6327 void* m_pMappedData = VMA_NULL; // Not null means memory is mapped.
6328 VmaWin32Handle m_Handle;
6329};
6330
6331struct VmaAllocation_T
6332{
6333 friend struct VmaDedicatedAllocationListItemTraits;
6334
6335 enum FLAGS
6336 {
6337 FLAG_PERSISTENT_MAP = 0x01,
6338 FLAG_MAPPING_ALLOWED = 0x02,
6339 };
6340
6341public:
6342 enum ALLOCATION_TYPE
6343 {
6344 ALLOCATION_TYPE_NONE,
6345 ALLOCATION_TYPE_BLOCK,
6346 ALLOCATION_TYPE_DEDICATED,
6347 };
6348
6349 // This struct is allocated using VmaPoolAllocator.
6350 VmaAllocation_T(bool mappingAllowed);
6351 ~VmaAllocation_T();
6352
6353 void InitBlockAllocation(
6354 VmaDeviceMemoryBlock* block,
6355 VmaAllocHandle allocHandle,
6356 VkDeviceSize alignment,
6357 VkDeviceSize size,
6358 uint32_t memoryTypeIndex,
6359 VmaSuballocationType suballocationType,
6360 bool mapped);
6361 // pMappedData not null means allocation is created with MAPPED flag.
6362 void InitDedicatedAllocation(
6363 VmaAllocator allocator,
6364 VmaPool hParentPool,
6365 uint32_t memoryTypeIndex,
6366 VkDeviceMemory hMemory,
6367 VmaSuballocationType suballocationType,
6368 void* pMappedData,
6369 VkDeviceSize size);
6370 void Destroy(VmaAllocator allocator);
6371
6372 ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
6373 VkDeviceSize GetAlignment() const { return m_Alignment; }
6374 VkDeviceSize GetSize() const { return m_Size; }
6375 void* GetUserData() const { return m_pUserData; }
6376 const char* GetName() const { return m_pName; }
6377 VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
6378
6379 VmaDeviceMemoryBlock* GetBlock() const { VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK); return m_BlockAllocation.m_Block; }
6380 uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
6381 bool IsPersistentMap() const { return (m_Flags & FLAG_PERSISTENT_MAP) != 0; }
6382 bool IsMappingAllowed() const { return (m_Flags & FLAG_MAPPING_ALLOWED) != 0; }
6383
6384 void SetUserData(VmaAllocator hAllocator, void* pUserData) { m_pUserData = pUserData; }
6385 void SetName(VmaAllocator hAllocator, const char* pName);
6386 void FreeName(VmaAllocator hAllocator);
6387 uint8_t SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation);
6388 VmaAllocHandle GetAllocHandle() const;
6389 VkDeviceSize GetOffset() const;
6390 VmaPool GetParentPool() const;
6391 VkDeviceMemory GetMemory() const;
6392 void* GetMappedData() const;
6393
6394 void BlockAllocMap();
6395 void BlockAllocUnmap();
6396 VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
6397 void DedicatedAllocUnmap(VmaAllocator hAllocator);
6398
6399#if VMA_STATS_STRING_ENABLED
6400 VmaBufferImageUsage GetBufferImageUsage() const { return m_BufferImageUsage; }
6401 void InitBufferUsage(const VkBufferCreateInfo &createInfo, bool useKhrMaintenance5)
6402 {
6403 VMA_ASSERT(m_BufferImageUsage == VmaBufferImageUsage::UNKNOWN);
6404 m_BufferImageUsage = VmaBufferImageUsage(createInfo, useKhrMaintenance5);
6405 }
6406 void InitImageUsage(const VkImageCreateInfo &createInfo)
6407 {
6408 VMA_ASSERT(m_BufferImageUsage == VmaBufferImageUsage::UNKNOWN);
6409 m_BufferImageUsage = VmaBufferImageUsage(createInfo);
6410 }
6411 void PrintParameters(class VmaJsonWriter& json) const;
6412#endif
6413
6414#if VMA_EXTERNAL_MEMORY_WIN32
6415 VkResult GetWin32Handle(VmaAllocator hAllocator, HANDLE hTargetProcess, HANDLE* hHandle) noexcept;
6416#endif // VMA_EXTERNAL_MEMORY_WIN32
6417
6418private:
6419 // Allocation out of VmaDeviceMemoryBlock.
6420 struct BlockAllocation
6421 {
6422 VmaDeviceMemoryBlock* m_Block;
6423 VmaAllocHandle m_AllocHandle;
6424 };
6425 // Allocation for an object that has its own private VkDeviceMemory.
6426 struct DedicatedAllocation
6427 {
        VmaPool m_hParentPool; // VK_NULL_HANDLE if it doesn't belong to a custom pool.
6429 VkDeviceMemory m_hMemory;
6430 VmaAllocationExtraData* m_ExtraData;
6431 VmaAllocation_T* m_Prev;
6432 VmaAllocation_T* m_Next;
6433 };
6434 union
6435 {
6436 // Allocation out of VmaDeviceMemoryBlock.
6437 BlockAllocation m_BlockAllocation;
6438 // Allocation for an object that has its own private VkDeviceMemory.
6439 DedicatedAllocation m_DedicatedAllocation;
6440 };
6441
6442 VkDeviceSize m_Alignment;
6443 VkDeviceSize m_Size;
6444 void* m_pUserData;
6445 char* m_pName;
6446 uint32_t m_MemoryTypeIndex;
6447 uint8_t m_Type; // ALLOCATION_TYPE
6448 uint8_t m_SuballocationType; // VmaSuballocationType
6449 // Reference counter for vmaMapMemory()/vmaUnmapMemory().
6450 uint8_t m_MapCount;
6451 uint8_t m_Flags; // enum FLAGS
6452#if VMA_STATS_STRING_ENABLED
6453 VmaBufferImageUsage m_BufferImageUsage; // 0 if unknown.
6454#endif
6455
6456 void EnsureExtraData(VmaAllocator hAllocator);
6457};
6458#endif // _VMA_ALLOCATION_T
6459
6460#ifndef _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
6461struct VmaDedicatedAllocationListItemTraits
6462{
6463 typedef VmaAllocation_T ItemType;
6464
6465 static ItemType* GetPrev(const ItemType* item)
6466 {
6467 VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
6468 return item->m_DedicatedAllocation.m_Prev;
6469 }
6470 static ItemType* GetNext(const ItemType* item)
6471 {
6472 VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
6473 return item->m_DedicatedAllocation.m_Next;
6474 }
6475 static ItemType*& AccessPrev(ItemType* item)
6476 {
6477 VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
6478 return item->m_DedicatedAllocation.m_Prev;
6479 }
6480 static ItemType*& AccessNext(ItemType* item)
6481 {
6482 VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
6483 return item->m_DedicatedAllocation.m_Next;
6484 }
6485};
6486#endif // _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
6487
6488#ifndef _VMA_DEDICATED_ALLOCATION_LIST
6489/*
6490Stores linked list of VmaAllocation_T objects.
6491Thread-safe, synchronized internally.
6492*/
6493class VmaDedicatedAllocationList
6494{
6495 VMA_CLASS_NO_COPY_NO_MOVE(VmaDedicatedAllocationList)
6496public:
6497 VmaDedicatedAllocationList() {}
6498 ~VmaDedicatedAllocationList();
6499
6500 void Init(bool useMutex) { m_UseMutex = useMutex; }
6501 bool Validate();
6502
6503 void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
6504 void AddStatistics(VmaStatistics& inoutStats);
6505#if VMA_STATS_STRING_ENABLED
6506 // Writes JSON array with the list of allocations.
6507 void BuildStatsString(VmaJsonWriter& json);
6508#endif
6509
6510 bool IsEmpty();
6511 void Register(VmaAllocation alloc);
6512 void Unregister(VmaAllocation alloc);
6513
6514private:
6515 typedef VmaIntrusiveLinkedList<VmaDedicatedAllocationListItemTraits> DedicatedAllocationLinkedList;
6516
6517 bool m_UseMutex = true;
6518 VMA_RW_MUTEX m_Mutex;
6519 DedicatedAllocationLinkedList m_AllocationList;
6520};
6521
6522#ifndef _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
6523
6524VmaDedicatedAllocationList::~VmaDedicatedAllocationList()
6525{
6526 VMA_HEAVY_ASSERT(Validate());
6527
6528 if (!m_AllocationList.IsEmpty())
6529 {
6530 VMA_ASSERT_LEAK(false && "Unfreed dedicated allocations found!");
6531 }
6532}
6533
6534bool VmaDedicatedAllocationList::Validate()
6535{
6536 const size_t declaredCount = m_AllocationList.GetCount();
6537 size_t actualCount = 0;
6538 VmaMutexLockRead lock(m_Mutex, m_UseMutex);
6539 for (VmaAllocation alloc = m_AllocationList.Front();
        alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
6541 {
6542 ++actualCount;
6543 }
6544 VMA_VALIDATE(actualCount == declaredCount);
6545
6546 return true;
6547}
6548
6549void VmaDedicatedAllocationList::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
6550{
6551 for(auto* item = m_AllocationList.Front(); item != VMA_NULL; item = DedicatedAllocationLinkedList::GetNext(item))
6552 {
6553 const VkDeviceSize size = item->GetSize();
6554 inoutStats.statistics.blockCount++;
6555 inoutStats.statistics.blockBytes += size;
        VmaAddDetailedStatisticsAllocation(inoutStats, size);
6557 }
6558}
6559
6560void VmaDedicatedAllocationList::AddStatistics(VmaStatistics& inoutStats)
6561{
6562 VmaMutexLockRead lock(m_Mutex, m_UseMutex);
6563
6564 const uint32_t allocCount = (uint32_t)m_AllocationList.GetCount();
6565 inoutStats.blockCount += allocCount;
6566 inoutStats.allocationCount += allocCount;
6567
6568 for(auto* item = m_AllocationList.Front(); item != VMA_NULL; item = DedicatedAllocationLinkedList::GetNext(item))
6569 {
6570 const VkDeviceSize size = item->GetSize();
6571 inoutStats.blockBytes += size;
6572 inoutStats.allocationBytes += size;
6573 }
6574}
6575
6576#if VMA_STATS_STRING_ENABLED
6577void VmaDedicatedAllocationList::BuildStatsString(VmaJsonWriter& json)
6578{
6579 VmaMutexLockRead lock(m_Mutex, m_UseMutex);
6580 json.BeginArray();
6581 for (VmaAllocation alloc = m_AllocationList.Front();
        alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
    {
        json.BeginObject(true);
6585 alloc->PrintParameters(json);
6586 json.EndObject();
6587 }
6588 json.EndArray();
6589}
6590#endif // VMA_STATS_STRING_ENABLED
6591
6592bool VmaDedicatedAllocationList::IsEmpty()
6593{
6594 VmaMutexLockRead lock(m_Mutex, m_UseMutex);
6595 return m_AllocationList.IsEmpty();
6596}
6597
6598void VmaDedicatedAllocationList::Register(VmaAllocation alloc)
6599{
6600 VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
    m_AllocationList.PushBack(alloc);
6602}
6603
6604void VmaDedicatedAllocationList::Unregister(VmaAllocation alloc)
6605{
6606 VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
    m_AllocationList.Remove(alloc);
6608}
6609#endif // _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
6610#endif // _VMA_DEDICATED_ALLOCATION_LIST
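
/*
A minimal sketch of the intended use of VmaDedicatedAllocationList (alloc is a
hypothetical VmaAllocation of type ALLOCATION_TYPE_DEDICATED):

    VmaDedicatedAllocationList list;
    list.Init(true);        // true = synchronize internally with m_Mutex.
    list.Register(alloc);   // After the dedicated allocation is created.
    // ...
    list.Unregister(alloc); // Before the dedicated allocation is freed.
*/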
6611
6612#ifndef _VMA_SUBALLOCATION
6613/*
Represents a region of a VmaDeviceMemoryBlock that is either assigned to an
allocation and returned as an allocated memory block, or free.
6616*/
6617struct VmaSuballocation
6618{
6619 VkDeviceSize offset;
6620 VkDeviceSize size;
6621 void* userData;
6622 VmaSuballocationType type;
6623};
6624
6625// Comparator for offsets.
6626struct VmaSuballocationOffsetLess
6627{
6628 bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
6629 {
6630 return lhs.offset < rhs.offset;
6631 }
6632};
6633
6634struct VmaSuballocationOffsetGreater
6635{
6636 bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
6637 {
6638 return lhs.offset > rhs.offset;
6639 }
6640};
6641
6642struct VmaSuballocationItemSizeLess
6643{
6644 bool operator()(const VmaSuballocationList::iterator lhs,
6645 const VmaSuballocationList::iterator rhs) const
6646 {
6647 return lhs->size < rhs->size;
6648 }
6649
6650 bool operator()(const VmaSuballocationList::iterator lhs,
6651 VkDeviceSize rhsSize) const
6652 {
6653 return lhs->size < rhsSize;
6654 }
6655};
6656#endif // _VMA_SUBALLOCATION
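
/*
The comparators above are intended for keeping suballocation vectors sorted and
for binary searches over them. A minimal sketch, assuming a vector sorted by
ascending offset and <algorithm> available (suballocations and desiredOffset
are hypothetical names):

    VmaSuballocation refSuballoc = {};
    refSuballoc.offset = desiredOffset;
    // First suballocation with offset >= desiredOffset:
    const auto it = std::lower_bound(suballocations.begin(), suballocations.end(),
        refSuballoc, VmaSuballocationOffsetLess());
*/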
6657
6658#ifndef _VMA_ALLOCATION_REQUEST
6659/*
6660Parameters of planned allocation inside a VmaDeviceMemoryBlock.
6661item points to a FREE suballocation.
6662*/
6663struct VmaAllocationRequest
6664{
6665 VmaAllocHandle allocHandle;
6666 VkDeviceSize size;
6667 VmaSuballocationList::iterator item;
6668 void* customData;
6669 uint64_t algorithmData;
6670 VmaAllocationRequestType type;
6671};
6672#endif // _VMA_ALLOCATION_REQUEST
6673
6674#ifndef _VMA_BLOCK_METADATA
6675/*
6676Data structure used for bookkeeping of allocations and unused ranges of memory
6677in a single VkDeviceMemory block.
6678*/
6679class VmaBlockMetadata
6680{
6681 VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata)
6682public:
6683 // pAllocationCallbacks, if not null, must be owned externally - alive and unchanged for the whole lifetime of this object.
6684 VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
6685 VkDeviceSize bufferImageGranularity, bool isVirtual);
6686 virtual ~VmaBlockMetadata() = default;
6687
6688 virtual void Init(VkDeviceSize size) { m_Size = size; }
6689 bool IsVirtual() const { return m_IsVirtual; }
6690 VkDeviceSize GetSize() const { return m_Size; }
6691
6692 // Validates all data structures inside this object. If not valid, returns false.
6693 virtual bool Validate() const = 0;
6694 virtual size_t GetAllocationCount() const = 0;
6695 virtual size_t GetFreeRegionsCount() const = 0;
6696 virtual VkDeviceSize GetSumFreeSize() const = 0;
    // Returns true if this block is empty - contains only a single free suballocation.
6698 virtual bool IsEmpty() const = 0;
6699 virtual void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) = 0;
6700 virtual VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const = 0;
6701 virtual void* GetAllocationUserData(VmaAllocHandle allocHandle) const = 0;
6702
6703 virtual VmaAllocHandle GetAllocationListBegin() const = 0;
6704 virtual VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const = 0;
6705 virtual VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const = 0;
6706
6707 // Shouldn't modify blockCount.
6708 virtual void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const = 0;
6709 virtual void AddStatistics(VmaStatistics& inoutStats) const = 0;
6710
6711#if VMA_STATS_STRING_ENABLED
6712 virtual void PrintDetailedMap(class VmaJsonWriter& json) const = 0;
6713#endif
6714
6715 // Tries to find a place for suballocation with given parameters inside this block.
6716 // If succeeded, fills pAllocationRequest and returns true.
6717 // If failed, returns false.
6718 virtual bool CreateAllocationRequest(
6719 VkDeviceSize allocSize,
6720 VkDeviceSize allocAlignment,
6721 bool upperAddress,
6722 VmaSuballocationType allocType,
6723 // Always one of VMA_ALLOCATION_CREATE_STRATEGY_* or VMA_ALLOCATION_INTERNAL_STRATEGY_* flags.
6724 uint32_t strategy,
6725 VmaAllocationRequest* pAllocationRequest) = 0;
6726
6727 virtual VkResult CheckCorruption(const void* pBlockData) = 0;
6728
6729 // Makes actual allocation based on request. Request must already be checked and valid.
6730 virtual void Alloc(
6731 const VmaAllocationRequest& request,
6732 VmaSuballocationType type,
6733 void* userData) = 0;
6734
6735 // Frees suballocation assigned to given memory region.
6736 virtual void Free(VmaAllocHandle allocHandle) = 0;
6737
6738 // Frees all allocations.
6739 // Careful! Don't call it if there are VmaAllocation objects owned by userData of cleared allocations!
6740 virtual void Clear() = 0;
6741
6742 virtual void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) = 0;
6743 virtual void DebugLogAllAllocations() const = 0;
6744
6745protected:
6746 const VkAllocationCallbacks* GetAllocationCallbacks() const { return m_pAllocationCallbacks; }
6747 VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
6748 VkDeviceSize GetDebugMargin() const { return VkDeviceSize(IsVirtual() ? 0 : VMA_DEBUG_MARGIN); }
6749
6750 void DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const;
6751#if VMA_STATS_STRING_ENABLED
6753 void PrintDetailedMap_Begin(class VmaJsonWriter& json,
6754 VkDeviceSize unusedBytes,
6755 size_t allocationCount,
6756 size_t unusedRangeCount) const;
6757 void PrintDetailedMap_Allocation(class VmaJsonWriter& json,
6758 VkDeviceSize offset, VkDeviceSize size, void* userData) const;
6759 void PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
6760 VkDeviceSize offset,
6761 VkDeviceSize size) const;
6762 void PrintDetailedMap_End(class VmaJsonWriter& json) const;
6763#endif
6764
6765private:
6766 VkDeviceSize m_Size;
6767 const VkAllocationCallbacks* m_pAllocationCallbacks;
6768 const VkDeviceSize m_BufferImageGranularity;
6769 const bool m_IsVirtual;
6770};
6771
6772#ifndef _VMA_BLOCK_METADATA_FUNCTIONS
6773VmaBlockMetadata::VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
6774 VkDeviceSize bufferImageGranularity, bool isVirtual)
6775 : m_Size(0),
6776 m_pAllocationCallbacks(pAllocationCallbacks),
6777 m_BufferImageGranularity(bufferImageGranularity),
6778 m_IsVirtual(isVirtual) {}
6779
6780void VmaBlockMetadata::DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const
6781{
6782 if (IsVirtual())
6783 {
6784 VMA_LEAK_LOG_FORMAT("UNFREED VIRTUAL ALLOCATION; Offset: %" PRIu64 "; Size: %" PRIu64 "; UserData: %p", offset, size, userData);
6785 }
6786 else
6787 {
6788 VMA_ASSERT(userData != VMA_NULL);
6789 VmaAllocation allocation = reinterpret_cast<VmaAllocation>(userData);
6790
6791 userData = allocation->GetUserData();
6792 const char* name = allocation->GetName();
6793
6794#if VMA_STATS_STRING_ENABLED
6795 VMA_LEAK_LOG_FORMAT("UNFREED ALLOCATION; Offset: %" PRIu64 "; Size: %" PRIu64 "; UserData: %p; Name: %s; Type: %s; Usage: %" PRIu64,
6796 offset, size, userData, name ? name : "vma_empty",
6797 VMA_SUBALLOCATION_TYPE_NAMES[allocation->GetSuballocationType()],
6798 (uint64_t)allocation->GetBufferImageUsage().Value);
6799#else
6800 VMA_LEAK_LOG_FORMAT("UNFREED ALLOCATION; Offset: %" PRIu64 "; Size: %" PRIu64 "; UserData: %p; Name: %s; Type: %u",
6801 offset, size, userData, name ? name : "vma_empty",
6802 (unsigned)allocation->GetSuballocationType());
6803#endif // VMA_STATS_STRING_ENABLED
6804 }
6805
6806}
6807
6808#if VMA_STATS_STRING_ENABLED
6809void VmaBlockMetadata::PrintDetailedMap_Begin(class VmaJsonWriter& json,
6810 VkDeviceSize unusedBytes, size_t allocationCount, size_t unusedRangeCount) const
6811{
6812 json.WriteString(pStr: "TotalBytes");
6813 json.WriteNumber(n: GetSize());
6814
6815 json.WriteString(pStr: "UnusedBytes");
6816 json.WriteNumber(n: unusedBytes);
6817
6818 json.WriteString(pStr: "Allocations");
6819 json.WriteNumber(n: (uint64_t)allocationCount);
6820
6821 json.WriteString(pStr: "UnusedRanges");
6822 json.WriteNumber(n: (uint64_t)unusedRangeCount);
6823
6824 json.WriteString(pStr: "Suballocations");
6825 json.BeginArray();
6826}
6827
6828void VmaBlockMetadata::PrintDetailedMap_Allocation(class VmaJsonWriter& json,
6829 VkDeviceSize offset, VkDeviceSize size, void* userData) const
6830{
6831 json.BeginObject(singleLine: true);
6832
6833 json.WriteString(pStr: "Offset");
6834 json.WriteNumber(n: offset);
6835
6836 if (IsVirtual())
6837 {
6838 json.WriteString(pStr: "Size");
6839 json.WriteNumber(n: size);
6840 if (userData)
6841 {
6842 json.WriteString(pStr: "CustomData");
6843 json.BeginString();
6844 json.ContinueString_Pointer(ptr: userData);
6845 json.EndString();
6846 }
6847 }
6848 else
6849 {
6850 ((VmaAllocation)userData)->PrintParameters(json);
6851 }
6852
6853 json.EndObject();
6854}
6855
6856void VmaBlockMetadata::PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
6857 VkDeviceSize offset, VkDeviceSize size) const
6858{
6859 json.BeginObject(singleLine: true);
6860
6861 json.WriteString(pStr: "Offset");
6862 json.WriteNumber(n: offset);
6863
6864 json.WriteString(pStr: "Type");
6865 json.WriteString(pStr: VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);
6866
6867 json.WriteString(pStr: "Size");
6868 json.WriteNumber(n: size);
6869
6870 json.EndObject();
6871}
6872
6873void VmaBlockMetadata::PrintDetailedMap_End(class VmaJsonWriter& json) const
6874{
6875 json.EndArray();
6876}
6877#endif // VMA_STATS_STRING_ENABLED
6878#endif // _VMA_BLOCK_METADATA_FUNCTIONS
6879#endif // _VMA_BLOCK_METADATA
6880
6881#ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
// Before deleting an object of this class, remember to call Destroy().
6883class VmaBlockBufferImageGranularity final
6884{
6885public:
6886 struct ValidationContext
6887 {
6888 const VkAllocationCallbacks* allocCallbacks;
6889 uint16_t* pageAllocs;
6890 };
6891
6892 VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity);
6893 ~VmaBlockBufferImageGranularity();
6894
6895 bool IsEnabled() const { return m_BufferImageGranularity > MAX_LOW_BUFFER_IMAGE_GRANULARITY; }
6896
6897 void Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size);
    // Before destroying this object you must call Destroy() to free its memory.
6899 void Destroy(const VkAllocationCallbacks* pAllocationCallbacks);
6900
6901 void RoundupAllocRequest(VmaSuballocationType allocType,
6902 VkDeviceSize& inOutAllocSize,
6903 VkDeviceSize& inOutAllocAlignment) const;
6904
6905 bool CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
6906 VkDeviceSize allocSize,
6907 VkDeviceSize blockOffset,
6908 VkDeviceSize blockSize,
6909 VmaSuballocationType allocType) const;
6910
6911 void AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size);
6912 void FreePages(VkDeviceSize offset, VkDeviceSize size);
6913 void Clear();
6914
6915 ValidationContext StartValidation(const VkAllocationCallbacks* pAllocationCallbacks,
        bool isVirtual) const;
6917 bool Validate(ValidationContext& ctx, VkDeviceSize offset, VkDeviceSize size) const;
6918 bool FinishValidation(ValidationContext& ctx) const;
6919
6920private:
6921 static const uint16_t MAX_LOW_BUFFER_IMAGE_GRANULARITY = 256;
6922
6923 struct RegionInfo
6924 {
6925 uint8_t allocType;
6926 uint16_t allocCount;
6927 };
6928
6929 VkDeviceSize m_BufferImageGranularity;
6930 uint32_t m_RegionCount;
6931 RegionInfo* m_RegionInfo;
6932
    uint32_t GetStartPage(VkDeviceSize offset) const { return OffsetToPageIndex(offset & ~(m_BufferImageGranularity - 1)); }
    uint32_t GetEndPage(VkDeviceSize offset, VkDeviceSize size) const { return OffsetToPageIndex((offset + size - 1) & ~(m_BufferImageGranularity - 1)); }
6935
6936 uint32_t OffsetToPageIndex(VkDeviceSize offset) const;
6937 void AllocPage(RegionInfo& page, uint8_t allocType);
6938};
6939
6940#ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
6941VmaBlockBufferImageGranularity::VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity)
6942 : m_BufferImageGranularity(bufferImageGranularity),
6943 m_RegionCount(0),
6944 m_RegionInfo(VMA_NULL) {}
6945
6946VmaBlockBufferImageGranularity::~VmaBlockBufferImageGranularity()
6947{
    VMA_ASSERT(m_RegionInfo == VMA_NULL && "Destroy() not called before destroying object!");
6949}
6950
6951void VmaBlockBufferImageGranularity::Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size)
6952{
6953 if (IsEnabled())
6954 {
        m_RegionCount = static_cast<uint32_t>(VmaDivideRoundingUp(size, m_BufferImageGranularity));
        m_RegionInfo = vma_new_array(pAllocationCallbacks, RegionInfo, m_RegionCount);
        memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
6958 }
6959}
6960
6961void VmaBlockBufferImageGranularity::Destroy(const VkAllocationCallbacks* pAllocationCallbacks)
6962{
6963 if (m_RegionInfo)
6964 {
        vma_delete_array(pAllocationCallbacks, m_RegionInfo, m_RegionCount);
6966 m_RegionInfo = VMA_NULL;
6967 }
6968}
6969
6970void VmaBlockBufferImageGranularity::RoundupAllocRequest(VmaSuballocationType allocType,
6971 VkDeviceSize& inOutAllocSize,
6972 VkDeviceSize& inOutAllocAlignment) const
6973{
6974 if (m_BufferImageGranularity > 1 &&
6975 m_BufferImageGranularity <= MAX_LOW_BUFFER_IMAGE_GRANULARITY)
6976 {
6977 if (allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
6978 allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
6979 allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
6980 {
6981 inOutAllocAlignment = VMA_MAX(inOutAllocAlignment, m_BufferImageGranularity);
            inOutAllocSize = VmaAlignUp(inOutAllocSize, m_BufferImageGranularity);
6983 }
6984 }
6985}
6986
6987bool VmaBlockBufferImageGranularity::CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
6988 VkDeviceSize allocSize,
6989 VkDeviceSize blockOffset,
6990 VkDeviceSize blockSize,
6991 VmaSuballocationType allocType) const
6992{
6993 if (IsEnabled())
6994 {
        uint32_t startPage = GetStartPage(inOutAllocOffset);
        if (m_RegionInfo[startPage].allocCount > 0 &&
            VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[startPage].allocType), allocType))
        {
            inOutAllocOffset = VmaAlignUp(inOutAllocOffset, m_BufferImageGranularity);
7000 if (blockSize < allocSize + inOutAllocOffset - blockOffset)
7001 return true;
7002 ++startPage;
7003 }
        uint32_t endPage = GetEndPage(inOutAllocOffset, allocSize);
        if (endPage != startPage &&
            m_RegionInfo[endPage].allocCount > 0 &&
            VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[endPage].allocType), allocType))
7008 {
7009 return true;
7010 }
7011 }
7012 return false;
7013}
7014
7015void VmaBlockBufferImageGranularity::AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size)
7016{
7017 if (IsEnabled())
7018 {
7019 uint32_t startPage = GetStartPage(offset);
        AllocPage(m_RegionInfo[startPage], allocType);

        uint32_t endPage = GetEndPage(offset, size);
        if (startPage != endPage)
            AllocPage(m_RegionInfo[endPage], allocType);
7025 }
7026}
7027
7028void VmaBlockBufferImageGranularity::FreePages(VkDeviceSize offset, VkDeviceSize size)
7029{
7030 if (IsEnabled())
7031 {
7032 uint32_t startPage = GetStartPage(offset);
7033 --m_RegionInfo[startPage].allocCount;
7034 if (m_RegionInfo[startPage].allocCount == 0)
7035 m_RegionInfo[startPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
7036 uint32_t endPage = GetEndPage(offset, size);
7037 if (startPage != endPage)
7038 {
7039 --m_RegionInfo[endPage].allocCount;
7040 if (m_RegionInfo[endPage].allocCount == 0)
7041 m_RegionInfo[endPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
7042 }
7043 }
7044}
7045
7046void VmaBlockBufferImageGranularity::Clear()
7047{
7048 if (m_RegionInfo)
        memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
7050}
7051
7052VmaBlockBufferImageGranularity::ValidationContext VmaBlockBufferImageGranularity::StartValidation(
    const VkAllocationCallbacks* pAllocationCallbacks, bool isVirtual) const
{
    ValidationContext ctx{ pAllocationCallbacks, VMA_NULL };
    if (!isVirtual && IsEnabled())
    {
        ctx.pageAllocs = vma_new_array(pAllocationCallbacks, uint16_t, m_RegionCount);
        memset(ctx.pageAllocs, 0, m_RegionCount * sizeof(uint16_t));
7060 }
7061 return ctx;
7062}
7063
7064bool VmaBlockBufferImageGranularity::Validate(ValidationContext& ctx,
7065 VkDeviceSize offset, VkDeviceSize size) const
7066{
7067 if (IsEnabled())
7068 {
7069 uint32_t start = GetStartPage(offset);
7070 ++ctx.pageAllocs[start];
7071 VMA_VALIDATE(m_RegionInfo[start].allocCount > 0);
7072
7073 uint32_t end = GetEndPage(offset, size);
7074 if (start != end)
7075 {
7076 ++ctx.pageAllocs[end];
7077 VMA_VALIDATE(m_RegionInfo[end].allocCount > 0);
7078 }
7079 }
7080 return true;
7081}
7082
7083bool VmaBlockBufferImageGranularity::FinishValidation(ValidationContext& ctx) const
7084{
7085 // Check proper page structure
7086 if (IsEnabled())
7087 {
7088 VMA_ASSERT(ctx.pageAllocs != VMA_NULL && "Validation context not initialized!");
7089
7090 for (uint32_t page = 0; page < m_RegionCount; ++page)
7091 {
7092 VMA_VALIDATE(ctx.pageAllocs[page] == m_RegionInfo[page].allocCount);
7093 }
        vma_delete_array(ctx.allocCallbacks, ctx.pageAllocs, m_RegionCount);
7095 ctx.pageAllocs = VMA_NULL;
7096 }
7097 return true;
7098}
7099
7100uint32_t VmaBlockBufferImageGranularity::OffsetToPageIndex(VkDeviceSize offset) const
7101{
7102 return static_cast<uint32_t>(offset >> VMA_BITSCAN_MSB(m_BufferImageGranularity));
7103}
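
/*
Worked example for OffsetToPageIndex() above: the mapping relies on
m_BufferImageGranularity being a power of two (the Vulkan spec guarantees this
for the bufferImageGranularity limit), so dividing by it is a right shift by
its bit index. With a granularity of 1024 (bit 10 set),
VMA_BITSCAN_MSB(1024) == 10 and offset 5000 maps to page 5000 >> 10 == 4,
i.e. the region beginning at byte 4096.
*/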
7104
7105void VmaBlockBufferImageGranularity::AllocPage(RegionInfo& page, uint8_t allocType)
7106{
    // When the current alloc type is FREE, it can be overridden by the new type.
7108 if (page.allocCount == 0 || (page.allocCount > 0 && page.allocType == VMA_SUBALLOCATION_TYPE_FREE))
7109 page.allocType = allocType;
7110
7111 ++page.allocCount;
7112}
7113#endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
7114#endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
7115
7116#ifndef _VMA_BLOCK_METADATA_LINEAR
7117/*
7118Allocations and their references in internal data structure look like this:
7119
7120if(m_2ndVectorMode == SECOND_VECTOR_EMPTY):
7121
7122 0 +-------+
7123 | |
7124 | |
7125 | |
7126 +-------+
7127 | Alloc | 1st[m_1stNullItemsBeginCount]
7128 +-------+
7129 | Alloc | 1st[m_1stNullItemsBeginCount + 1]
7130 +-------+
7131 | ... |
7132 +-------+
7133 | Alloc | 1st[1st.size() - 1]
7134 +-------+
7135 | |
7136 | |
7137 | |
7138GetSize() +-------+
7139
7140if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER):
7141
7142 0 +-------+
7143 | Alloc | 2nd[0]
7144 +-------+
7145 | Alloc | 2nd[1]
7146 +-------+
7147 | ... |
7148 +-------+
7149 | Alloc | 2nd[2nd.size() - 1]
7150 +-------+
7151 | |
7152 | |
7153 | |
7154 +-------+
7155 | Alloc | 1st[m_1stNullItemsBeginCount]
7156 +-------+
7157 | Alloc | 1st[m_1stNullItemsBeginCount + 1]
7158 +-------+
7159 | ... |
7160 +-------+
7161 | Alloc | 1st[1st.size() - 1]
7162 +-------+
7163 | |
7164GetSize() +-------+
7165
7166if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK):
7167
7168 0 +-------+
7169 | |
7170 | |
7171 | |
7172 +-------+
7173 | Alloc | 1st[m_1stNullItemsBeginCount]
7174 +-------+
7175 | Alloc | 1st[m_1stNullItemsBeginCount + 1]
7176 +-------+
7177 | ... |
7178 +-------+
7179 | Alloc | 1st[1st.size() - 1]
7180 +-------+
7181 | |
7182 | |
7183 | |
7184 +-------+
7185 | Alloc | 2nd[2nd.size() - 1]
7186 +-------+
7187 | ... |
7188 +-------+
7189 | Alloc | 2nd[1]
7190 +-------+
7191 | Alloc | 2nd[0]
7192GetSize() +-------+
7193
7194*/
7195class VmaBlockMetadata_Linear : public VmaBlockMetadata
7196{
7197 VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_Linear)
7198public:
7199 VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
7200 VkDeviceSize bufferImageGranularity, bool isVirtual);
7201 virtual ~VmaBlockMetadata_Linear() = default;
7202
7203 VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize; }
7204 bool IsEmpty() const override { return GetAllocationCount() == 0; }
7205 VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; }
7206
7207 void Init(VkDeviceSize size) override;
7208 bool Validate() const override;
7209 size_t GetAllocationCount() const override;
7210 size_t GetFreeRegionsCount() const override;
7211
7212 void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
7213 void AddStatistics(VmaStatistics& inoutStats) const override;
7214
7215#if VMA_STATS_STRING_ENABLED
7216 void PrintDetailedMap(class VmaJsonWriter& json) const override;
7217#endif
7218
7219 bool CreateAllocationRequest(
7220 VkDeviceSize allocSize,
7221 VkDeviceSize allocAlignment,
7222 bool upperAddress,
7223 VmaSuballocationType allocType,
7224 uint32_t strategy,
7225 VmaAllocationRequest* pAllocationRequest) override;
7226
7227 VkResult CheckCorruption(const void* pBlockData) override;
7228
7229 void Alloc(
7230 const VmaAllocationRequest& request,
7231 VmaSuballocationType type,
7232 void* userData) override;
7233
7234 void Free(VmaAllocHandle allocHandle) override;
7235 void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
7236 void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
7237 VmaAllocHandle GetAllocationListBegin() const override;
7238 VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
7239 VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
7240 void Clear() override;
7241 void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
7242 void DebugLogAllAllocations() const override;
7243
7244private:
7245 /*
7246 There are two suballocation vectors, used in ping-pong way.
7247 The one with index m_1stVectorIndex is called 1st.
7248 The one with index (m_1stVectorIndex ^ 1) is called 2nd.
7249 2nd can be non-empty only when 1st is not empty.
7250 When 2nd is not empty, m_2ndVectorMode indicates its mode of operation.
7251 */
7252 typedef VmaVector<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> SuballocationVectorType;
7253
7254 enum SECOND_VECTOR_MODE
7255 {
7256 SECOND_VECTOR_EMPTY,
7257 /*
7258 Suballocations in 2nd vector are created later than the ones in 1st, but they
7259 all have smaller offset.
7260 */
7261 SECOND_VECTOR_RING_BUFFER,
7262 /*
7263 Suballocations in 2nd vector are upper side of double stack.
7264 They all have offsets higher than those in 1st vector.
7265 Top of this stack means smaller offsets, but higher indices in this vector.
7266 */
7267 SECOND_VECTOR_DOUBLE_STACK,
7268 };
7269
7270 VkDeviceSize m_SumFreeSize;
7271 SuballocationVectorType m_Suballocations0, m_Suballocations1;
7272 uint32_t m_1stVectorIndex;
7273 SECOND_VECTOR_MODE m_2ndVectorMode;
7274 // Number of items in 1st vector with hAllocation = null at the beginning.
7275 size_t m_1stNullItemsBeginCount;
7276 // Number of other items in 1st vector with hAllocation = null somewhere in the middle.
7277 size_t m_1stNullItemsMiddleCount;
7278 // Number of items in 2nd vector with hAllocation = null.
7279 size_t m_2ndNullItemsCount;
7280
7281 SuballocationVectorType& AccessSuballocations1st() { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
7282 SuballocationVectorType& AccessSuballocations2nd() { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
7283 const SuballocationVectorType& AccessSuballocations1st() const { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
7284 const SuballocationVectorType& AccessSuballocations2nd() const { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
7285
7286 VmaSuballocation& FindSuballocation(VkDeviceSize offset) const;
7287 bool ShouldCompact1st() const;
7288 void CleanupAfterFree();
7289
7290 bool CreateAllocationRequest_LowerAddress(
7291 VkDeviceSize allocSize,
7292 VkDeviceSize allocAlignment,
7293 VmaSuballocationType allocType,
7294 uint32_t strategy,
7295 VmaAllocationRequest* pAllocationRequest);
7296 bool CreateAllocationRequest_UpperAddress(
7297 VkDeviceSize allocSize,
7298 VkDeviceSize allocAlignment,
7299 VmaSuballocationType allocType,
7300 uint32_t strategy,
7301 VmaAllocationRequest* pAllocationRequest);
7302};
7303
7304#ifndef _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
7305VmaBlockMetadata_Linear::VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
7306 VkDeviceSize bufferImageGranularity, bool isVirtual)
7307 : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
7308 m_SumFreeSize(0),
7309 m_Suballocations0(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
7310 m_Suballocations1(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
7311 m_1stVectorIndex(0),
7312 m_2ndVectorMode(SECOND_VECTOR_EMPTY),
7313 m_1stNullItemsBeginCount(0),
7314 m_1stNullItemsMiddleCount(0),
7315 m_2ndNullItemsCount(0) {}
7316
7317void VmaBlockMetadata_Linear::Init(VkDeviceSize size)
7318{
7319 VmaBlockMetadata::Init(size);
7320 m_SumFreeSize = size;
7321}
7322
7323bool VmaBlockMetadata_Linear::Validate() const
7324{
7325 const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
7326 const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
7327
7328 VMA_VALIDATE(suballocations2nd.empty() == (m_2ndVectorMode == SECOND_VECTOR_EMPTY));
7329 VMA_VALIDATE(!suballocations1st.empty() ||
7330 suballocations2nd.empty() ||
7331 m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER);
7332
7333 if (!suballocations1st.empty())
7334 {
7335 // Null item at the beginning should be accounted into m_1stNullItemsBeginCount.
        // A null item at the beginning should be accounted for in m_1stNullItemsBeginCount.
7337 // Null item at the end should be just pop_back().
7338 VMA_VALIDATE(suballocations1st.back().type != VMA_SUBALLOCATION_TYPE_FREE);
7339 }
7340 if (!suballocations2nd.empty())
7341 {
7342 // Null item at the end should be just pop_back().
7343 VMA_VALIDATE(suballocations2nd.back().type != VMA_SUBALLOCATION_TYPE_FREE);
7344 }
7345
7346 VMA_VALIDATE(m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount <= suballocations1st.size());
7347 VMA_VALIDATE(m_2ndNullItemsCount <= suballocations2nd.size());
7348
7349 VkDeviceSize sumUsedSize = 0;
7350 const size_t suballoc1stCount = suballocations1st.size();
7351 const VkDeviceSize debugMargin = GetDebugMargin();
7352 VkDeviceSize offset = 0;
7353
7354 if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
7355 {
7356 const size_t suballoc2ndCount = suballocations2nd.size();
7357 size_t nullItem2ndCount = 0;
7358 for (size_t i = 0; i < suballoc2ndCount; ++i)
7359 {
7360 const VmaSuballocation& suballoc = suballocations2nd[i];
7361 const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7362
7363 VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
7364 if (!IsVirtual())
7365 {
7366 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
7367 }
7368 VMA_VALIDATE(suballoc.offset >= offset);
7369
7370 if (!currFree)
7371 {
7372 if (!IsVirtual())
7373 {
7374 VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
7375 VMA_VALIDATE(alloc->GetSize() == suballoc.size);
7376 }
7377 sumUsedSize += suballoc.size;
7378 }
7379 else
7380 {
7381 ++nullItem2ndCount;
7382 }
7383
7384 offset = suballoc.offset + suballoc.size + debugMargin;
7385 }
7386
7387 VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
7388 }
7389
7390 for (size_t i = 0; i < m_1stNullItemsBeginCount; ++i)
7391 {
7392 const VmaSuballocation& suballoc = suballocations1st[i];
7393 VMA_VALIDATE(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE &&
7394 suballoc.userData == VMA_NULL);
7395 }
7396
7397 size_t nullItem1stCount = m_1stNullItemsBeginCount;
7398
7399 for (size_t i = m_1stNullItemsBeginCount; i < suballoc1stCount; ++i)
7400 {
7401 const VmaSuballocation& suballoc = suballocations1st[i];
7402 const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7403
7404 VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
7405 if (!IsVirtual())
7406 {
7407 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
7408 }
7409 VMA_VALIDATE(suballoc.offset >= offset);
7410 VMA_VALIDATE(i >= m_1stNullItemsBeginCount || currFree);
7411
7412 if (!currFree)
7413 {
7414 if (!IsVirtual())
7415 {
7416 VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
7417 VMA_VALIDATE(alloc->GetSize() == suballoc.size);
7418 }
7419 sumUsedSize += suballoc.size;
7420 }
7421 else
7422 {
7423 ++nullItem1stCount;
7424 }
7425
7426 offset = suballoc.offset + suballoc.size + debugMargin;
7427 }
7428 VMA_VALIDATE(nullItem1stCount == m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount);
7429
7430 if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
7431 {
7432 const size_t suballoc2ndCount = suballocations2nd.size();
7433 size_t nullItem2ndCount = 0;
7434 for (size_t i = suballoc2ndCount; i--; )
7435 {
7436 const VmaSuballocation& suballoc = suballocations2nd[i];
7437 const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7438
7439 VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
7440 if (!IsVirtual())
7441 {
7442 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
7443 }
7444 VMA_VALIDATE(suballoc.offset >= offset);
7445
7446 if (!currFree)
7447 {
7448 if (!IsVirtual())
7449 {
7450 VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
7451 VMA_VALIDATE(alloc->GetSize() == suballoc.size);
7452 }
7453 sumUsedSize += suballoc.size;
7454 }
7455 else
7456 {
7457 ++nullItem2ndCount;
7458 }
7459
7460 offset = suballoc.offset + suballoc.size + debugMargin;
7461 }
7462
7463 VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
7464 }
7465
7466 VMA_VALIDATE(offset <= GetSize());
7467 VMA_VALIDATE(m_SumFreeSize == GetSize() - sumUsedSize);
7468
7469 return true;
7470}
7471
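// Live allocation count = items in both vectors minus the free ("null") items
// tracked by m_1stNullItemsBeginCount, m_1stNullItemsMiddleCount and m_2ndNullItemsCount.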
size_t VmaBlockMetadata_Linear::GetAllocationCount() const
{
    return AccessSuballocations1st().size() - m_1stNullItemsBeginCount - m_1stNullItemsMiddleCount +
        AccessSuballocations2nd().size() - m_2ndNullItemsCount;
}

size_t VmaBlockMetadata_Linear::GetFreeRegionsCount() const
{
    // Function only used for defragmentation, which is disabled for this algorithm
    VMA_ASSERT(0);
    return SIZE_MAX;
}

void VmaBlockMetadata_Linear::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    inoutStats.statistics.blockCount++;
    inoutStats.statistics.blockBytes += size;

    VkDeviceSize lastOffset = 0;

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAllocIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                if (lastOffset < freeSpace2ndTo1stEnd)
                {
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAllocIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if (lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
            }

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            // There is free space from lastOffset to freeSpace1stTo2ndEnd.
            if (lastOffset < freeSpace1stTo2ndEnd)
            {
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAllocIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // There is free space from lastOffset to size.
                if (lastOffset < size)
                {
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }
}

void VmaBlockMetadata_Linear::AddStatistics(VmaStatistics& inoutStats) const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const VkDeviceSize size = GetSize();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    inoutStats.blockCount++;
    inoutStats.blockBytes += size;
    inoutStats.allocationBytes += size - m_SumFreeSize;

    VkDeviceSize lastOffset = 0;

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = m_1stNullItemsBeginCount;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++inoutStats.allocationCount;

                // Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAllocIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            ++inoutStats.allocationCount;

            // Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++inoutStats.allocationCount;

                // Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // End of loop.
                lastOffset = size;
            }
        }
    }
}

#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Linear::PrintDetailedMap(class VmaJsonWriter& json) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    // FIRST PASS

    size_t unusedRangeCount = 0;
    VkDeviceSize usedBytes = 0;

    VkDeviceSize lastOffset = 0;

    size_t alloc2ndCount = 0;
    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    ++unusedRangeCount;
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    size_t alloc1stCount = 0;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAllocIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if (lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                ++unusedRangeCount;
            }

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            ++alloc1stCount;
            usedBytes += suballoc.size;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if (lastOffset < freeSpace1stTo2ndEnd)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                ++unusedRangeCount;
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    ++unusedRangeCount;
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    const VkDeviceSize unusedBytes = size - usedBytes;
    PrintDetailedMap_Begin(json, unusedBytes, alloc1stCount + alloc2ndCount, unusedRangeCount);

    // SECOND PASS
    lastOffset = 0;

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    nextAlloc1stIndex = m_1stNullItemsBeginCount;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAllocIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if (lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
            }

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if (lastOffset < freeSpace1stTo2ndEnd)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    PrintDetailedMap_End(json);
}
#endif // VMA_STATS_STRING_ENABLED

bool VmaBlockMetadata_Linear::CreateAllocationRequest(
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    bool upperAddress,
    VmaSuballocationType allocType,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(pAllocationRequest != VMA_NULL);
    VMA_HEAVY_ASSERT(Validate());

    if (allocSize > GetSize())
        return false;

    pAllocationRequest->size = allocSize;
    return upperAddress ?
        CreateAllocationRequest_UpperAddress(
            allocSize, allocAlignment, allocType, strategy, pAllocationRequest) :
        CreateAllocationRequest_LowerAddress(
            allocSize, allocAlignment, allocType, strategy, pAllocationRequest);
}

VkResult VmaBlockMetadata_Linear::CheckCorruption(const void* pBlockData)
{
    VMA_ASSERT(!IsVirtual());
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    for (size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_UNKNOWN_COPY;
            }
        }
    }

    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    for (size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
    {
        const VmaSuballocation& suballoc = suballocations2nd[i];
        if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_UNKNOWN_COPY;
            }
        }
    }

    return VK_SUCCESS;
}

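// Note: throughout VmaBlockMetadata_Linear, a VmaAllocHandle stores the byte
// offset biased by +1 (handle == offset + 1), so an allocation placed at
// offset 0 is never confused with VK_NULL_HANDLE. Hence the "- 1" below and
// the "+ 1" in CreateAllocationRequest_LowerAddress/_UpperAddress.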
void VmaBlockMetadata_Linear::Alloc(
    const VmaAllocationRequest& request,
    VmaSuballocationType type,
    void* userData)
{
    const VkDeviceSize offset = (VkDeviceSize)request.allocHandle - 1;
    const VmaSuballocation newSuballoc = { offset, request.size, userData, type };

    switch (request.type)
    {
    case VmaAllocationRequestType::UpperAddress:
    {
        VMA_ASSERT(m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER &&
            "CRITICAL ERROR: Trying to use linear allocator as double stack while it was already used as ring buffer.");
        SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
        suballocations2nd.push_back(newSuballoc);
        m_2ndVectorMode = SECOND_VECTOR_DOUBLE_STACK;
    }
    break;
    case VmaAllocationRequestType::EndOf1st:
    {
        SuballocationVectorType& suballocations1st = AccessSuballocations1st();

        VMA_ASSERT(suballocations1st.empty() ||
            offset >= suballocations1st.back().offset + suballocations1st.back().size);
        // Check if it fits before the end of the block.
        VMA_ASSERT(offset + request.size <= GetSize());

        suballocations1st.push_back(newSuballoc);
    }
    break;
    case VmaAllocationRequestType::EndOf2nd:
    {
        SuballocationVectorType& suballocations1st = AccessSuballocations1st();
        // New allocation at the end of 2-part ring buffer, so before first allocation from 1st vector.
        VMA_ASSERT(!suballocations1st.empty() &&
            offset + request.size <= suballocations1st[m_1stNullItemsBeginCount].offset);
        SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

        switch (m_2ndVectorMode)
        {
        case SECOND_VECTOR_EMPTY:
            // First allocation from second part ring buffer.
            VMA_ASSERT(suballocations2nd.empty());
            m_2ndVectorMode = SECOND_VECTOR_RING_BUFFER;
            break;
        case SECOND_VECTOR_RING_BUFFER:
            // 2-part ring buffer is already started.
            VMA_ASSERT(!suballocations2nd.empty());
            break;
        case SECOND_VECTOR_DOUBLE_STACK:
            VMA_ASSERT(0 && "CRITICAL ERROR: Trying to use linear allocator as ring buffer while it was already used as double stack.");
            break;
        default:
            VMA_ASSERT(0);
        }

        suballocations2nd.push_back(newSuballoc);
    }
    break;
    default:
        VMA_ASSERT(0 && "CRITICAL INTERNAL ERROR.");
    }

    m_SumFreeSize -= newSuballoc.size;
}

void VmaBlockMetadata_Linear::Free(VmaAllocHandle allocHandle)
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    VkDeviceSize offset = (VkDeviceSize)allocHandle - 1;

    if (!suballocations1st.empty())
    {
        // First allocation: Mark it as next empty at the beginning.
        VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
        if (firstSuballoc.offset == offset)
        {
            firstSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
            firstSuballoc.userData = VMA_NULL;
            m_SumFreeSize += firstSuballoc.size;
            ++m_1stNullItemsBeginCount;
            CleanupAfterFree();
            return;
        }
    }

    // Last allocation in 2-part ring buffer or top of upper stack (same logic).
    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ||
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        VmaSuballocation& lastSuballoc = suballocations2nd.back();
        if (lastSuballoc.offset == offset)
        {
            m_SumFreeSize += lastSuballoc.size;
            suballocations2nd.pop_back();
            CleanupAfterFree();
            return;
        }
    }
    // Last allocation in 1st vector.
    else if (m_2ndVectorMode == SECOND_VECTOR_EMPTY)
    {
        VmaSuballocation& lastSuballoc = suballocations1st.back();
        if (lastSuballoc.offset == offset)
        {
            m_SumFreeSize += lastSuballoc.size;
            suballocations1st.pop_back();
            CleanupAfterFree();
            return;
        }
    }

    VmaSuballocation refSuballoc;
    refSuballoc.offset = offset;
    // Rest of members stays uninitialized intentionally for better performance.

    // Item from the middle of 1st vector.
    {
        const SuballocationVectorType::iterator it = VmaBinaryFindSorted(
            suballocations1st.begin() + m_1stNullItemsBeginCount,
            suballocations1st.end(),
            refSuballoc,
            VmaSuballocationOffsetLess());
        if (it != suballocations1st.end())
        {
            it->type = VMA_SUBALLOCATION_TYPE_FREE;
            it->userData = VMA_NULL;
            ++m_1stNullItemsMiddleCount;
            m_SumFreeSize += it->size;
            CleanupAfterFree();
            return;
        }
    }

    if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
    {
        // Item from the middle of 2nd vector.
        const SuballocationVectorType::iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
        if (it != suballocations2nd.end())
        {
            it->type = VMA_SUBALLOCATION_TYPE_FREE;
            it->userData = VMA_NULL;
            ++m_2ndNullItemsCount;
            m_SumFreeSize += it->size;
            CleanupAfterFree();
            return;
        }
    }

    VMA_ASSERT(0 && "Allocation to free not found in linear allocator!");
}
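
// A minimal usage sketch (hedged): the ring-buffer and double-stack behavior of
// this metadata is reachable through the library's public virtual-allocator API,
// e.g. (error handling omitted):
//
//     VmaVirtualBlockCreateInfo blockCreateInfo = {};
//     blockCreateInfo.size = 1048576; // 1 MiB
//     blockCreateInfo.flags = VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT;
//     VmaVirtualBlock block;
//     vmaCreateVirtualBlock(&blockCreateInfo, &block);
//
//     VmaVirtualAllocationCreateInfo allocCreateInfo = {};
//     allocCreateInfo.size = 4096;
//     VmaVirtualAllocation lowerAlloc, upperAlloc;
//     // Grows the 1st vector (lower addresses; stack / ring-buffer behavior):
//     vmaVirtualAllocate(block, &allocCreateInfo, &lowerAlloc, VMA_NULL);
//     // Grows the 2nd vector downward from the end (double stack):
//     allocCreateInfo.flags = VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT;
//     vmaVirtualAllocate(block, &allocCreateInfo, &upperAlloc, VMA_NULL);
//
//     vmaVirtualFree(block, upperAlloc);
//     vmaVirtualFree(block, lowerAlloc);
//     vmaDestroyVirtualBlock(block);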

void VmaBlockMetadata_Linear::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
{
    outInfo.offset = (VkDeviceSize)allocHandle - 1;
    VmaSuballocation& suballoc = FindSuballocation(outInfo.offset);
    outInfo.size = suballoc.size;
    outInfo.pUserData = suballoc.userData;
}

void* VmaBlockMetadata_Linear::GetAllocationUserData(VmaAllocHandle allocHandle) const
{
    return FindSuballocation((VkDeviceSize)allocHandle - 1).userData;
}

VmaAllocHandle VmaBlockMetadata_Linear::GetAllocationListBegin() const
{
    // Function only used for defragmentation, which is disabled for this algorithm
    VMA_ASSERT(0);
    return VK_NULL_HANDLE;
}

VmaAllocHandle VmaBlockMetadata_Linear::GetNextAllocation(VmaAllocHandle prevAlloc) const
{
    // Function only used for defragmentation, which is disabled for this algorithm
    VMA_ASSERT(0);
    return VK_NULL_HANDLE;
}

VkDeviceSize VmaBlockMetadata_Linear::GetNextFreeRegionSize(VmaAllocHandle alloc) const
{
    // Function only used for defragmentation, which is disabled for this algorithm
    VMA_ASSERT(0);
    return 0;
}

void VmaBlockMetadata_Linear::Clear()
{
    m_SumFreeSize = GetSize();
    m_Suballocations0.clear();
    m_Suballocations1.clear();
    // Leaving m_1stVectorIndex unchanged - it doesn't matter.
    m_2ndVectorMode = SECOND_VECTOR_EMPTY;
    m_1stNullItemsBeginCount = 0;
    m_1stNullItemsMiddleCount = 0;
    m_2ndNullItemsCount = 0;
}

void VmaBlockMetadata_Linear::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
{
    VmaSuballocation& suballoc = FindSuballocation((VkDeviceSize)allocHandle - 1);
    suballoc.userData = userData;
}

void VmaBlockMetadata_Linear::DebugLogAllAllocations() const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    for (auto it = suballocations1st.begin() + m_1stNullItemsBeginCount; it != suballocations1st.end(); ++it)
        if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
            DebugLogAllocation(it->offset, it->size, it->userData);

    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    for (auto it = suballocations2nd.begin(); it != suballocations2nd.end(); ++it)
        if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
            DebugLogAllocation(it->offset, it->size, it->userData);
}

VmaSuballocation& VmaBlockMetadata_Linear::FindSuballocation(VkDeviceSize offset) const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    VmaSuballocation refSuballoc;
    refSuballoc.offset = offset;
    // Rest of members stays uninitialized intentionally for better performance.

    // Item from the 1st vector.
    {
        SuballocationVectorType::const_iterator it = VmaBinaryFindSorted(
            suballocations1st.begin() + m_1stNullItemsBeginCount,
            suballocations1st.end(),
            refSuballoc,
            VmaSuballocationOffsetLess());
        if (it != suballocations1st.end())
        {
            return const_cast<VmaSuballocation&>(*it);
        }
    }

    if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
    {
        // Item from the 2nd vector: sorted ascending by offset for a ring buffer,
        // descending for a double stack.
        SuballocationVectorType::const_iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
        if (it != suballocations2nd.end())
        {
            return const_cast<VmaSuballocation&>(*it);
        }
    }

    VMA_ASSERT(0 && "Allocation not found in linear allocator!");
    return const_cast<VmaSuballocation&>(suballocations1st.back()); // Should never occur.
}

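// Heuristic: compact the 1st vector once it holds more than 32 items and the
// free ("null") items outnumber the live ones by at least 3:2, i.e.
// nullItemCount >= 1.5 * (suballocCount - nullItemCount).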
bool VmaBlockMetadata_Linear::ShouldCompact1st() const
{
    const size_t nullItemCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
    const size_t suballocCount = AccessSuballocations1st().size();
    return suballocCount > 32 && nullItemCount * 2 >= (suballocCount - nullItemCount) * 3;
}

void VmaBlockMetadata_Linear::CleanupAfterFree()
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if (IsEmpty())
    {
        suballocations1st.clear();
        suballocations2nd.clear();
        m_1stNullItemsBeginCount = 0;
        m_1stNullItemsMiddleCount = 0;
        m_2ndNullItemsCount = 0;
        m_2ndVectorMode = SECOND_VECTOR_EMPTY;
    }
    else
    {
        const size_t suballoc1stCount = suballocations1st.size();
        const size_t nullItem1stCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
        VMA_ASSERT(nullItem1stCount <= suballoc1stCount);

        // Find more null items at the beginning of 1st vector.
        while (m_1stNullItemsBeginCount < suballoc1stCount &&
            suballocations1st[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            ++m_1stNullItemsBeginCount;
            --m_1stNullItemsMiddleCount;
        }

        // Find more null items at the end of 1st vector.
        while (m_1stNullItemsMiddleCount > 0 &&
            suballocations1st.back().type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            --m_1stNullItemsMiddleCount;
            suballocations1st.pop_back();
        }

        // Find more null items at the end of 2nd vector.
        while (m_2ndNullItemsCount > 0 &&
            suballocations2nd.back().type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            --m_2ndNullItemsCount;
            suballocations2nd.pop_back();
        }

        // Find more null items at the beginning of 2nd vector.
        while (m_2ndNullItemsCount > 0 &&
            suballocations2nd[0].type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            --m_2ndNullItemsCount;
            VmaVectorRemove(suballocations2nd, 0);
        }

        if (ShouldCompact1st())
        {
            const size_t nonNullItemCount = suballoc1stCount - nullItem1stCount;
            size_t srcIndex = m_1stNullItemsBeginCount;
            for (size_t dstIndex = 0; dstIndex < nonNullItemCount; ++dstIndex)
            {
                while (suballocations1st[srcIndex].type == VMA_SUBALLOCATION_TYPE_FREE)
                {
                    ++srcIndex;
                }
                if (dstIndex != srcIndex)
                {
                    suballocations1st[dstIndex] = suballocations1st[srcIndex];
                }
                ++srcIndex;
            }
            suballocations1st.resize(nonNullItemCount);
            m_1stNullItemsBeginCount = 0;
            m_1stNullItemsMiddleCount = 0;
        }

        // 2nd vector became empty.
        if (suballocations2nd.empty())
        {
            m_2ndVectorMode = SECOND_VECTOR_EMPTY;
        }

        // 1st vector became empty.
        if (suballocations1st.size() - m_1stNullItemsBeginCount == 0)
        {
            suballocations1st.clear();
            m_1stNullItemsBeginCount = 0;

            if (!suballocations2nd.empty() && m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
            {
                // Swap 1st with 2nd. Now 2nd is empty.
                m_2ndVectorMode = SECOND_VECTOR_EMPTY;
                m_1stNullItemsMiddleCount = m_2ndNullItemsCount;
                while (m_1stNullItemsBeginCount < suballocations2nd.size() &&
                    suballocations2nd[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
                {
                    ++m_1stNullItemsBeginCount;
                    --m_1stNullItemsMiddleCount;
                }
                m_2ndNullItemsCount = 0;
                m_1stVectorIndex ^= 1;
            }
        }
    }

    VMA_HEAVY_ASSERT(Validate());
}

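// Lower-address requests are tried in two places: first at the end of the 1st
// vector (possible while the 2nd vector is empty or acts as a double stack),
// then, failing that, wrapped around to the end of the 2nd vector, treating the
// block as a ring buffer whose free space ends at the first live item of the
// 1st vector.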
bool VmaBlockMetadata_Linear::CreateAllocationRequest_LowerAddress(
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    const VkDeviceSize blockSize = GetSize();
    const VkDeviceSize debugMargin = GetDebugMargin();
    const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        // Try to allocate at the end of 1st vector.

        VkDeviceSize resultBaseOffset = 0;
        if (!suballocations1st.empty())
        {
            const VmaSuballocation& lastSuballoc = suballocations1st.back();
            resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
        }

        // Start from offset equal to beginning of free space.
        VkDeviceSize resultOffset = resultBaseOffset;

        // Apply alignment.
        resultOffset = VmaAlignUp(resultOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations1st.empty())
        {
            bool bufferImageGranularityConflict = false;
            for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
                if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                    // Already on previous page.
                    break;
            }
            if (bufferImageGranularityConflict)
            {
                resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
            }
        }

        const VkDeviceSize freeSpaceEnd = m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ?
            suballocations2nd.back().offset : blockSize;

        // There is enough free space at the end after alignment.
        if (resultOffset + allocSize + debugMargin <= freeSpaceEnd)
        {
            // Check next suballocations for BufferImageGranularity conflicts.
            // If conflict exists, allocation cannot be made here.
            if ((allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity) && m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
            {
                for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
                {
                    const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
                    if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                    {
                        if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                        {
                            return false;
                        }
                    }
                    else
                    {
                        // Already on previous page.
                        break;
                    }
                }
            }

            // All tests passed: Success.
            pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
            // pAllocationRequest->item, customData unused.
            pAllocationRequest->type = VmaAllocationRequestType::EndOf1st;
            return true;
        }
    }

    // Wrap-around to end of 2nd vector. Try to allocate there, watching for the
    // beginning of 1st vector as the end of free space.
    if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        VMA_ASSERT(!suballocations1st.empty());

        VkDeviceSize resultBaseOffset = 0;
        if (!suballocations2nd.empty())
        {
            const VmaSuballocation& lastSuballoc = suballocations2nd.back();
            resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
        }

        // Start from offset equal to beginning of free space.
        VkDeviceSize resultOffset = resultBaseOffset;

        // Apply alignment.
        resultOffset = VmaAlignUp(resultOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
        {
            bool bufferImageGranularityConflict = false;
            for (size_t prevSuballocIndex = suballocations2nd.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations2nd[prevSuballocIndex];
                if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                    // Already on previous page.
                    break;
            }
            if (bufferImageGranularityConflict)
            {
                resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
            }
        }

        size_t index1st = m_1stNullItemsBeginCount;

        // There is enough free space at the end after alignment.
        if ((index1st == suballocations1st.size() && resultOffset + allocSize + debugMargin <= blockSize) ||
            (index1st < suballocations1st.size() && resultOffset + allocSize + debugMargin <= suballocations1st[index1st].offset))
        {
            // Check next suballocations for BufferImageGranularity conflicts.
            // If conflict exists, allocation cannot be made here.
            if (allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
            {
                for (size_t nextSuballocIndex = index1st;
                    nextSuballocIndex < suballocations1st.size();
                    nextSuballocIndex++)
                {
                    const VmaSuballocation& nextSuballoc = suballocations1st[nextSuballocIndex];
                    if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                    {
                        if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                        {
                            return false;
                        }
                    }
                    else
                    {
                        // Already on next page.
                        break;
                    }
                }
            }

            // All tests passed: Success.
            pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
            pAllocationRequest->type = VmaAllocationRequestType::EndOf2nd;
            // pAllocationRequest->item, customData unused.
            return true;
        }
    }

    return false;
}

bool VmaBlockMetadata_Linear::CreateAllocationRequest_UpperAddress(
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    const VkDeviceSize blockSize = GetSize();
    const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        VMA_ASSERT(0 && "Trying to use pool with linear algorithm as double stack, while it is already being used as ring buffer.");
        return false;
    }

    // Try to allocate before 2nd.back(), or end of block if 2nd.empty().
    if (allocSize > blockSize)
    {
        return false;
    }
    VkDeviceSize resultBaseOffset = blockSize - allocSize;
    if (!suballocations2nd.empty())
    {
        const VmaSuballocation& lastSuballoc = suballocations2nd.back();
        resultBaseOffset = lastSuballoc.offset - allocSize;
        if (allocSize > lastSuballoc.offset)
        {
            return false;
        }
    }

    // Start from offset equal to end of free space.
    VkDeviceSize resultOffset = resultBaseOffset;

    const VkDeviceSize debugMargin = GetDebugMargin();

    // Apply debugMargin at the end.
    if (debugMargin > 0)
    {
        if (resultOffset < debugMargin)
        {
            return false;
        }
        resultOffset -= debugMargin;
    }

    // Apply alignment.
    resultOffset = VmaAlignDown(resultOffset, allocAlignment);

    // Check next suballocations from 2nd for BufferImageGranularity conflicts.
    // Make bigger alignment if necessary.
    if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
    {
        bool bufferImageGranularityConflict = false;
        for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
        {
            const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
            if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
            {
                if (VmaIsBufferImageGranularityConflict(nextSuballoc.type, allocType))
                {
                    bufferImageGranularityConflict = true;
                    break;
                }
            }
            else
                // Already on previous page.
                break;
        }
        if (bufferImageGranularityConflict)
        {
            resultOffset = VmaAlignDown(resultOffset, bufferImageGranularity);
        }
    }

    // There is enough free space.
    const VkDeviceSize endOf1st = !suballocations1st.empty() ?
        suballocations1st.back().offset + suballocations1st.back().size :
        0;
    if (endOf1st + debugMargin <= resultOffset)
    {
        // Check previous suballocations for BufferImageGranularity conflicts.
        // If conflict exists, allocation cannot be made here.
        if (bufferImageGranularity > 1)
        {
            for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
                if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if (VmaIsBufferImageGranularityConflict(allocType, prevSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
            }
        }

        // All tests passed: Success.
        pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
        // pAllocationRequest->item unused.
        pAllocationRequest->type = VmaAllocationRequestType::UpperAddress;
        return true;
    }

    return false;
}
#endif // _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_LINEAR

#ifndef _VMA_BLOCK_METADATA_TLSF
// Pass VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT as the strategy in
// CreateAllocationRequest() to avoid searching the current, larger size range
// when the first allocation attempt fails there, and to skip directly to a
// smaller range. When fragmentation and reuse of previously freed blocks do not
// matter, pass VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT instead for the
// fastest possible allocation time.
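//
// A minimal sketch (hedged) of selecting these strategies through the public
// API, assuming an existing VmaAllocator and a filled VkBufferCreateInfo:
//
//     VmaAllocationCreateInfo allocCreateInfo = {};
//     allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
//     allocCreateInfo.flags = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT; // speed over packing
//     VkBuffer buf;
//     VmaAllocation alloc;
//     vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, VMA_NULL);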
class VmaBlockMetadata_TLSF : public VmaBlockMetadata
{
    VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_TLSF)
public:
    VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
        VkDeviceSize bufferImageGranularity, bool isVirtual);
    virtual ~VmaBlockMetadata_TLSF();

    size_t GetAllocationCount() const override { return m_AllocCount; }
    size_t GetFreeRegionsCount() const override { return m_BlocksFreeCount + 1; }
    VkDeviceSize GetSumFreeSize() const override { return m_BlocksFreeSize + m_NullBlock->size; }
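    // The trailing null block covers all free space at the end of the memory;
    // while it still starts at offset 0, nothing has been allocated.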
    bool IsEmpty() const override { return m_NullBlock->offset == 0; }
    VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return ((Block*)allocHandle)->offset; }

    void Init(VkDeviceSize size) override;
    bool Validate() const override;

    void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
    void AddStatistics(VmaStatistics& inoutStats) const override;

#if VMA_STATS_STRING_ENABLED
    void PrintDetailedMap(class VmaJsonWriter& json) const override;
#endif

    bool CreateAllocationRequest(
        VkDeviceSize allocSize,
        VkDeviceSize allocAlignment,
        bool upperAddress,
        VmaSuballocationType allocType,
        uint32_t strategy,
        VmaAllocationRequest* pAllocationRequest) override;

    VkResult CheckCorruption(const void* pBlockData) override;
    void Alloc(
        const VmaAllocationRequest& request,
        VmaSuballocationType type,
        void* userData) override;

    void Free(VmaAllocHandle allocHandle) override;
    void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
    void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
    VmaAllocHandle GetAllocationListBegin() const override;
    VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
    VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
    void Clear() override;
    void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
    void DebugLogAllAllocations() const override;

private:
    // According to the original paper it should preferably be 4 or 5:
    // M. Masmano, I. Ripoll, A. Crespo, and J. Real "TLSF: a New Dynamic Memory Allocator for Real-Time Systems"
    // http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf
    static const uint8_t SECOND_LEVEL_INDEX = 5;
    static const uint16_t SMALL_BUFFER_SIZE = 256;
    static const uint32_t INITIAL_BLOCK_ALLOC_COUNT = 16;
    static const uint8_t MEMORY_CLASS_SHIFT = 7;
    static const uint8_t MAX_MEMORY_CLASSES = 65 - MEMORY_CLASS_SHIFT;
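    // Worked example of the paper's two-level mapping (the variant implemented
    // below additionally shifts by MEMORY_CLASS_SHIFT and special-cases sizes
    // under SMALL_BUFFER_SIZE): with SECOND_LEVEL_INDEX = 5, a 1000-byte request
    // has first level fl = floor(log2(1000)) = 9 (range 512..1023) and second
    // level sl = (1000 >> (9 - 5)) - 2^5 = 62 - 32 = 30, i.e. it maps to the
    // free list covering offsets [992, 1008) within that class.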

    class Block
    {
    public:
        VkDeviceSize offset;
        VkDeviceSize size;
        Block* prevPhysical;
        Block* nextPhysical;

        void MarkFree() { prevFree = VMA_NULL; }
        void MarkTaken() { prevFree = this; }
        bool IsFree() const { return prevFree != this; }
        void*& UserData() { VMA_HEAVY_ASSERT(!IsFree()); return userData; }
        Block*& PrevFree() { return prevFree; }
        Block*& NextFree() { VMA_HEAVY_ASSERT(IsFree()); return nextFree; }

    private:
        Block* prevFree; // Address of the same block here indicates that block is taken
        union
        {
            Block* nextFree;
            void* userData;
        };
    };
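    // Note: Block reuses its own storage: prevFree doubles as a "taken" sentinel
    // (it points at the block itself while the block is in use), and the union
    // holds either the free-list link (nextFree) or the client's userData, so a
    // block costs the same in both states.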
8891
8892 size_t m_AllocCount;
8893 // Total number of free blocks besides null block
8894 size_t m_BlocksFreeCount;
8895 // Total size of free blocks excluding null block
8896 VkDeviceSize m_BlocksFreeSize;
8897 uint32_t m_IsFreeBitmap;
8898 uint8_t m_MemoryClasses;
8899 uint32_t m_InnerIsFreeBitmap[MAX_MEMORY_CLASSES];
8900 uint32_t m_ListsCount;
8901 /*
8902 * 0: 0-3 lists for small buffers
8903 * 1+: 0-(2^SLI-1) lists for normal buffers
8904 */
8905 Block** m_FreeList;
8906 VmaPoolAllocator<Block> m_BlockAllocator;
8907 Block* m_NullBlock;
8908 VmaBlockBufferImageGranularity m_GranularityHandler;
8909
8910 uint8_t SizeToMemoryClass(VkDeviceSize size) const;
8911 uint16_t SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const;
8912 uint32_t GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const;
8913 uint32_t GetListIndex(VkDeviceSize size) const;
8914
8915 void RemoveFreeBlock(Block* block);
8916 void InsertFreeBlock(Block* block);
8917 void MergeBlock(Block* block, Block* prev);
8918
8919 Block* FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const;
8920 bool CheckBlock(
8921 Block& block,
8922 uint32_t listIndex,
8923 VkDeviceSize allocSize,
8924 VkDeviceSize allocAlignment,
8925 VmaSuballocationType allocType,
8926 VmaAllocationRequest* pAllocationRequest);
8927};
8928
8929#ifndef _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
8930VmaBlockMetadata_TLSF::VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
8931 VkDeviceSize bufferImageGranularity, bool isVirtual)
8932 : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
8933 m_AllocCount(0),
8934 m_BlocksFreeCount(0),
8935 m_BlocksFreeSize(0),
8936 m_IsFreeBitmap(0),
8937 m_MemoryClasses(0),
8938 m_ListsCount(0),
8939 m_FreeList(VMA_NULL),
8940 m_BlockAllocator(pAllocationCallbacks, INITIAL_BLOCK_ALLOC_COUNT),
8941 m_NullBlock(VMA_NULL),
8942 m_GranularityHandler(bufferImageGranularity) {}
8943
8944VmaBlockMetadata_TLSF::~VmaBlockMetadata_TLSF()
8945{
8946 if (m_FreeList)
8947 vma_delete_array(pAllocationCallbacks: GetAllocationCallbacks(), ptr: m_FreeList, count: m_ListsCount);
8948 m_GranularityHandler.Destroy(pAllocationCallbacks: GetAllocationCallbacks());
8949}
8950
8951void VmaBlockMetadata_TLSF::Init(VkDeviceSize size)
8952{
8953 VmaBlockMetadata::Init(size);
8954
8955 if (!IsVirtual())
8956 m_GranularityHandler.Init(pAllocationCallbacks: GetAllocationCallbacks(), size);
8957
8958 m_NullBlock = m_BlockAllocator.Alloc();
8959 m_NullBlock->size = size;
8960 m_NullBlock->offset = 0;
8961 m_NullBlock->prevPhysical = VMA_NULL;
8962 m_NullBlock->nextPhysical = VMA_NULL;
8963 m_NullBlock->MarkFree();
8964 m_NullBlock->NextFree() = VMA_NULL;
8965 m_NullBlock->PrevFree() = VMA_NULL;
8966 uint8_t memoryClass = SizeToMemoryClass(size);
8967 uint16_t sli = SizeToSecondIndex(size, memoryClass);
8968 m_ListsCount = (memoryClass == 0 ? 0 : (memoryClass - 1) * (1UL << SECOND_LEVEL_INDEX) + sli) + 1;
8969 if (IsVirtual())
8970 m_ListsCount += 1UL << SECOND_LEVEL_INDEX;
8971 else
8972 m_ListsCount += 4;
8973
8974 m_MemoryClasses = memoryClass + uint8_t(2);
8975 memset(s: m_InnerIsFreeBitmap, c: 0, n: MAX_MEMORY_CLASSES * sizeof(uint32_t));
8976
8977 m_FreeList = vma_new_array(GetAllocationCallbacks(), Block*, m_ListsCount);
8978 memset(s: m_FreeList, c: 0, n: m_ListsCount * sizeof(Block*));
8979}
8980
8981bool VmaBlockMetadata_TLSF::Validate() const
8982{
8983 VMA_VALIDATE(GetSumFreeSize() <= GetSize());
8984
8985 VkDeviceSize calculatedSize = m_NullBlock->size;
8986 VkDeviceSize calculatedFreeSize = m_NullBlock->size;
8987 size_t allocCount = 0;
8988 size_t freeCount = 0;
8989
8990 // Check integrity of free lists
8991 for (uint32_t list = 0; list < m_ListsCount; ++list)
8992 {
8993 Block* block = m_FreeList[list];
8994 if (block != VMA_NULL)
8995 {
8996 VMA_VALIDATE(block->IsFree());
8997 VMA_VALIDATE(block->PrevFree() == VMA_NULL);
8998 while (block->NextFree())
8999 {
9000 VMA_VALIDATE(block->NextFree()->IsFree());
9001 VMA_VALIDATE(block->NextFree()->PrevFree() == block);
9002 block = block->NextFree();
9003 }
9004 }
9005 }
9006
9007 VkDeviceSize nextOffset = m_NullBlock->offset;
9008 auto validateCtx = m_GranularityHandler.StartValidation(pAllocationCallbacks: GetAllocationCallbacks(), isVirutal: IsVirtual());
9009
9010 VMA_VALIDATE(m_NullBlock->nextPhysical == VMA_NULL);
9011 if (m_NullBlock->prevPhysical)
9012 {
9013 VMA_VALIDATE(m_NullBlock->prevPhysical->nextPhysical == m_NullBlock);
9014 }
9015 // Check all blocks
9016 for (Block* prev = m_NullBlock->prevPhysical; prev != VMA_NULL; prev = prev->prevPhysical)
9017 {
9018 VMA_VALIDATE(prev->offset + prev->size == nextOffset);
9019 nextOffset = prev->offset;
9020 calculatedSize += prev->size;
9021
9022 uint32_t listIndex = GetListIndex(size: prev->size);
9023 if (prev->IsFree())
9024 {
9025 ++freeCount;
9026 // Check if free block belongs to free list
9027 Block* freeBlock = m_FreeList[listIndex];
9028 VMA_VALIDATE(freeBlock != VMA_NULL);
9029
9030 bool found = false;
9031 do
9032 {
9033 if (freeBlock == prev)
9034 found = true;
9035
9036 freeBlock = freeBlock->NextFree();
9037 } while (!found && freeBlock != VMA_NULL);
9038
9039 VMA_VALIDATE(found);
9040 calculatedFreeSize += prev->size;
9041 }
9042 else
9043 {
9044 ++allocCount;
9045 // Check if taken block is not on a free list
9046 Block* freeBlock = m_FreeList[listIndex];
9047 while (freeBlock)
9048 {
9049 VMA_VALIDATE(freeBlock != prev);
9050 freeBlock = freeBlock->NextFree();
9051 }
9052
9053 if (!IsVirtual())
9054 {
9055 VMA_VALIDATE(m_GranularityHandler.Validate(validateCtx, prev->offset, prev->size));
9056 }
9057 }
9058
9059 if (prev->prevPhysical)
9060 {
9061 VMA_VALIDATE(prev->prevPhysical->nextPhysical == prev);
9062 }
9063 }
9064
9065 if (!IsVirtual())
9066 {
9067 VMA_VALIDATE(m_GranularityHandler.FinishValidation(validateCtx));
9068 }
9069
9070 VMA_VALIDATE(nextOffset == 0);
9071 VMA_VALIDATE(calculatedSize == GetSize());
9072 VMA_VALIDATE(calculatedFreeSize == GetSumFreeSize());
9073 VMA_VALIDATE(allocCount == m_AllocCount);
9074 VMA_VALIDATE(freeCount == m_BlocksFreeCount);
9075
9076 return true;
9077}
9078
void VmaBlockMetadata_TLSF::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
{
    inoutStats.statistics.blockCount++;
    inoutStats.statistics.blockBytes += GetSize();
    if (m_NullBlock->size > 0)
        VmaAddDetailedStatisticsUnusedRange(inoutStats, m_NullBlock->size);

    for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
    {
        if (block->IsFree())
            VmaAddDetailedStatisticsUnusedRange(inoutStats, block->size);
        else
            VmaAddDetailedStatisticsAllocation(inoutStats, block->size);
    }
}
9094
9095void VmaBlockMetadata_TLSF::AddStatistics(VmaStatistics& inoutStats) const
9096{
9097 inoutStats.blockCount++;
9098 inoutStats.allocationCount += (uint32_t)m_AllocCount;
9099 inoutStats.blockBytes += GetSize();
9100 inoutStats.allocationBytes += GetSize() - GetSumFreeSize();
9101}
9102
9103#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_TLSF::PrintDetailedMap(class VmaJsonWriter& json) const
{
    size_t blockCount = m_AllocCount + m_BlocksFreeCount;
    VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
    VmaVector<Block*, VmaStlAllocator<Block*>> blockList(blockCount, allocator);

    size_t i = blockCount;
    for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
    {
        blockList[--i] = block;
    }
    VMA_ASSERT(i == 0);

    VmaDetailedStatistics stats;
    VmaClearDetailedStatistics(stats);
    AddDetailedStatistics(stats);

    PrintDetailedMap_Begin(json,
        stats.statistics.blockBytes - stats.statistics.allocationBytes,
        stats.statistics.allocationCount,
        stats.unusedRangeCount);

    for (; i < blockCount; ++i)
    {
        Block* block = blockList[i];
        if (block->IsFree())
            PrintDetailedMap_UnusedRange(json, block->offset, block->size);
        else
            PrintDetailedMap_Allocation(json, block->offset, block->size, block->UserData());
    }
    if (m_NullBlock->size > 0)
        PrintDetailedMap_UnusedRange(json, m_NullBlock->offset, m_NullBlock->size);

    PrintDetailedMap_End(json);
}
9139#endif
9140
9141bool VmaBlockMetadata_TLSF::CreateAllocationRequest(
9142 VkDeviceSize allocSize,
9143 VkDeviceSize allocAlignment,
9144 bool upperAddress,
9145 VmaSuballocationType allocType,
9146 uint32_t strategy,
9147 VmaAllocationRequest* pAllocationRequest)
9148{
9149 VMA_ASSERT(allocSize > 0 && "Cannot allocate empty block!");
9150 VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
9151
    // For small granularity round up
    if (!IsVirtual())
        m_GranularityHandler.RoundupAllocRequest(allocType, allocSize, allocAlignment);
9155
9156 allocSize += GetDebugMargin();
9157 // Quick check for too small pool
9158 if (allocSize > GetSumFreeSize())
9159 return false;
9160
    // If there are no free blocks in the pool, check only the null block
    if (m_BlocksFreeCount == 0)
        return CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest);
9164
9165 // Round up to the next block
9166 VkDeviceSize sizeForNextList = allocSize;
9167 VkDeviceSize smallSizeStep = VkDeviceSize(SMALL_BUFFER_SIZE / (IsVirtual() ? 1 << SECOND_LEVEL_INDEX : 4));
9168 if (allocSize > SMALL_BUFFER_SIZE)
9169 {
9170 sizeForNextList += (1ULL << (VMA_BITSCAN_MSB(allocSize) - SECOND_LEVEL_INDEX));
9171 }
9172 else if (allocSize > SMALL_BUFFER_SIZE - smallSizeStep)
9173 sizeForNextList = SMALL_BUFFER_SIZE + 1;
9174 else
9175 sizeForNextList += smallSizeStep;
9176
9177 uint32_t nextListIndex = m_ListsCount;
9178 uint32_t prevListIndex = m_ListsCount;
9179 Block* nextListBlock = VMA_NULL;
9180 Block* prevListBlock = VMA_NULL;
9181
9182 // Check blocks according to strategies
    if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT)
    {
        // Quick check for a larger block first
        nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
        if (nextListBlock != VMA_NULL && CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
            return true;

        // If it didn't fit, try the null block
        if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
            return true;

        // Null block failed, search the larger bucket
        while (nextListBlock)
        {
            if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            nextListBlock = nextListBlock->NextFree();
        }

        // Failed again, check the best-fit bucket
        prevListBlock = FindFreeBlock(allocSize, prevListIndex);
        while (prevListBlock)
        {
            if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            prevListBlock = prevListBlock->NextFree();
        }
    }
    else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT)
    {
        // Check the best-fit bucket
        prevListBlock = FindFreeBlock(allocSize, prevListIndex);
        while (prevListBlock)
        {
            if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            prevListBlock = prevListBlock->NextFree();
        }

        // If that failed, check the null block
        if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
            return true;

        // Check the larger bucket
        nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
        while (nextListBlock)
        {
            if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            nextListBlock = nextListBlock->NextFree();
        }
    }
    else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT)
    {
        // Perform the search from the start of the block
        VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
        VmaVector<Block*, VmaStlAllocator<Block*>> blockList(m_BlocksFreeCount, allocator);

        size_t i = m_BlocksFreeCount;
        for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
        {
            if (block->IsFree() && block->size >= allocSize)
                blockList[--i] = block;
        }

        for (; i < m_BlocksFreeCount; ++i)
        {
            Block& block = *blockList[i];
            if (CheckBlock(block, GetListIndex(block.size), allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
        }

        // If that failed, check the null block
        if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
            return true;

        // Whole range searched, no more memory
        return false;
    }
    else
    {
        // Check the larger bucket
        nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
        while (nextListBlock)
        {
            if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            nextListBlock = nextListBlock->NextFree();
        }

        // If that failed, check the null block
        if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
            return true;

        // Check the best-fit bucket
        prevListBlock = FindFreeBlock(allocSize, prevListIndex);
        while (prevListBlock)
        {
            if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            prevListBlock = prevListBlock->NextFree();
        }
    }
9286
    // Worst case: a full search over the remaining lists has to be done
    while (++nextListIndex < m_ListsCount)
    {
        nextListBlock = m_FreeList[nextListIndex];
        while (nextListBlock)
        {
            if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
                return true;
            nextListBlock = nextListBlock->NextFree();
        }
    }

    // No more memory, sadly
    return false;
9301}
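/*
Editor's note on the strategy order above (a summary, not normative): MIN_TIME
probes a one-size-larger bucket first because any free block there is large
enough to absorb worst-case alignment padding, so the first candidate usually
succeeds; MIN_MEMORY starts with the best-fit bucket to minimize wasted space;
MIN_OFFSET scans blocks in physical order to keep allocations packed toward
offset 0. A hedged sketch of how a user reaches this code through the public
API (assumes a valid VmaAllocator named `allocator`):

    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 65536;
    bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT; // fastest search path above

    VkBuffer buf;
    VmaAllocation alloc;
    vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, VMA_NULL);
*/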
9302
VkResult VmaBlockMetadata_TLSF::CheckCorruption(const void* pBlockData)
{
    for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
    {
        if (!block->IsFree())
        {
            if (!VmaValidateMagicValue(pBlockData, block->offset + block->size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_UNKNOWN;
            }
        }
    }

    return VK_SUCCESS;
}
9319
9320void VmaBlockMetadata_TLSF::Alloc(
9321 const VmaAllocationRequest& request,
9322 VmaSuballocationType type,
9323 void* userData)
9324{
9325 VMA_ASSERT(request.type == VmaAllocationRequestType::TLSF);
9326
9327 // Get block and pop it from the free list
9328 Block* currentBlock = (Block*)request.allocHandle;
9329 VkDeviceSize offset = request.algorithmData;
9330 VMA_ASSERT(currentBlock != VMA_NULL);
9331 VMA_ASSERT(currentBlock->offset <= offset);
9332
    if (currentBlock != m_NullBlock)
        RemoveFreeBlock(currentBlock);
9335
9336 VkDeviceSize debugMargin = GetDebugMargin();
9337 VkDeviceSize missingAlignment = offset - currentBlock->offset;
9338
9339 // Append missing alignment to prev block or create new one
9340 if (missingAlignment)
9341 {
9342 Block* prevBlock = currentBlock->prevPhysical;
9343 VMA_ASSERT(prevBlock != VMA_NULL && "There should be no missing alignment at offset 0!");
9344
        if (prevBlock->IsFree() && prevBlock->size != debugMargin)
        {
            uint32_t oldList = GetListIndex(prevBlock->size);
            prevBlock->size += missingAlignment;
            // Check if the new size crosses a list bucket boundary
            if (oldList != GetListIndex(prevBlock->size))
            {
                prevBlock->size -= missingAlignment;
                RemoveFreeBlock(prevBlock);
                prevBlock->size += missingAlignment;
                InsertFreeBlock(prevBlock);
            }
            else
                m_BlocksFreeSize += missingAlignment;
        }
9360 else
9361 {
9362 Block* newBlock = m_BlockAllocator.Alloc();
9363 currentBlock->prevPhysical = newBlock;
9364 prevBlock->nextPhysical = newBlock;
9365 newBlock->prevPhysical = prevBlock;
9366 newBlock->nextPhysical = currentBlock;
9367 newBlock->size = missingAlignment;
9368 newBlock->offset = currentBlock->offset;
9369 newBlock->MarkTaken();
9370
            InsertFreeBlock(newBlock);
9372 }
9373
9374 currentBlock->size -= missingAlignment;
9375 currentBlock->offset += missingAlignment;
9376 }
9377
9378 VkDeviceSize size = request.size + debugMargin;
9379 if (currentBlock->size == size)
9380 {
9381 if (currentBlock == m_NullBlock)
9382 {
9383 // Setup new null block
9384 m_NullBlock = m_BlockAllocator.Alloc();
9385 m_NullBlock->size = 0;
9386 m_NullBlock->offset = currentBlock->offset + size;
9387 m_NullBlock->prevPhysical = currentBlock;
9388 m_NullBlock->nextPhysical = VMA_NULL;
9389 m_NullBlock->MarkFree();
9390 m_NullBlock->PrevFree() = VMA_NULL;
9391 m_NullBlock->NextFree() = VMA_NULL;
9392 currentBlock->nextPhysical = m_NullBlock;
9393 currentBlock->MarkTaken();
9394 }
9395 }
9396 else
9397 {
9398 VMA_ASSERT(currentBlock->size > size && "Proper block already found, shouldn't find smaller one!");
9399
9400 // Create new free block
9401 Block* newBlock = m_BlockAllocator.Alloc();
9402 newBlock->size = currentBlock->size - size;
9403 newBlock->offset = currentBlock->offset + size;
9404 newBlock->prevPhysical = currentBlock;
9405 newBlock->nextPhysical = currentBlock->nextPhysical;
9406 currentBlock->nextPhysical = newBlock;
9407 currentBlock->size = size;
9408
9409 if (currentBlock == m_NullBlock)
9410 {
9411 m_NullBlock = newBlock;
9412 m_NullBlock->MarkFree();
9413 m_NullBlock->NextFree() = VMA_NULL;
9414 m_NullBlock->PrevFree() = VMA_NULL;
9415 currentBlock->MarkTaken();
9416 }
9417 else
9418 {
9419 newBlock->nextPhysical->prevPhysical = newBlock;
9420 newBlock->MarkTaken();
            InsertFreeBlock(newBlock);
9422 }
9423 }
9424 currentBlock->UserData() = userData;
9425
9426 if (debugMargin > 0)
9427 {
9428 currentBlock->size -= debugMargin;
9429 Block* newBlock = m_BlockAllocator.Alloc();
9430 newBlock->size = debugMargin;
9431 newBlock->offset = currentBlock->offset + currentBlock->size;
9432 newBlock->prevPhysical = currentBlock;
9433 newBlock->nextPhysical = currentBlock->nextPhysical;
9434 newBlock->MarkTaken();
9435 currentBlock->nextPhysical->prevPhysical = newBlock;
9436 currentBlock->nextPhysical = newBlock;
        InsertFreeBlock(newBlock);
9438 }
9439
    if (!IsVirtual())
        m_GranularityHandler.AllocPages((uint8_t)(uintptr_t)request.customData,
            currentBlock->offset, currentBlock->size);
9443 ++m_AllocCount;
9444}
9445
void VmaBlockMetadata_TLSF::Free(VmaAllocHandle allocHandle)
{
    Block* block = (Block*)allocHandle;
    Block* next = block->nextPhysical;
    VMA_ASSERT(!block->IsFree() && "Block is already free!");

    if (!IsVirtual())
        m_GranularityHandler.FreePages(block->offset, block->size);
    --m_AllocCount;

    VkDeviceSize debugMargin = GetDebugMargin();
    if (debugMargin > 0)
    {
        RemoveFreeBlock(next);
        MergeBlock(next, block);
        block = next;
        next = next->nextPhysical;
    }

    // Try merging
    Block* prev = block->prevPhysical;
    if (prev != VMA_NULL && prev->IsFree() && prev->size != debugMargin)
    {
        RemoveFreeBlock(prev);
        MergeBlock(block, prev);
    }

    if (!next->IsFree())
        InsertFreeBlock(block);
    else if (next == m_NullBlock)
        MergeBlock(m_NullBlock, block);
    else
    {
        RemoveFreeBlock(next);
        MergeBlock(next, block);
        InsertFreeBlock(next);
    }
}
9484
9485void VmaBlockMetadata_TLSF::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
9486{
9487 Block* block = (Block*)allocHandle;
9488 VMA_ASSERT(!block->IsFree() && "Cannot get allocation info for free block!");
9489 outInfo.offset = block->offset;
9490 outInfo.size = block->size;
9491 outInfo.pUserData = block->UserData();
9492}
9493
9494void* VmaBlockMetadata_TLSF::GetAllocationUserData(VmaAllocHandle allocHandle) const
9495{
9496 Block* block = (Block*)allocHandle;
9497 VMA_ASSERT(!block->IsFree() && "Cannot get user data for free block!");
9498 return block->UserData();
9499}
9500
9501VmaAllocHandle VmaBlockMetadata_TLSF::GetAllocationListBegin() const
9502{
9503 if (m_AllocCount == 0)
9504 return VK_NULL_HANDLE;
9505
9506 for (Block* block = m_NullBlock->prevPhysical; block; block = block->prevPhysical)
9507 {
9508 if (!block->IsFree())
9509 return (VmaAllocHandle)block;
9510 }
    VMA_ASSERT(false && "If m_AllocCount > 0 then we should find at least one allocation!");
9512 return VK_NULL_HANDLE;
9513}
9514
9515VmaAllocHandle VmaBlockMetadata_TLSF::GetNextAllocation(VmaAllocHandle prevAlloc) const
9516{
9517 Block* startBlock = (Block*)prevAlloc;
9518 VMA_ASSERT(!startBlock->IsFree() && "Incorrect block!");
9519
9520 for (Block* block = startBlock->prevPhysical; block; block = block->prevPhysical)
9521 {
9522 if (!block->IsFree())
9523 return (VmaAllocHandle)block;
9524 }
9525 return VK_NULL_HANDLE;
9526}
9527
9528VkDeviceSize VmaBlockMetadata_TLSF::GetNextFreeRegionSize(VmaAllocHandle alloc) const
9529{
9530 Block* block = (Block*)alloc;
9531 VMA_ASSERT(!block->IsFree() && "Incorrect block!");
9532
9533 if (block->prevPhysical)
9534 return block->prevPhysical->IsFree() ? block->prevPhysical->size : 0;
9535 return 0;
9536}
9537
9538void VmaBlockMetadata_TLSF::Clear()
9539{
9540 m_AllocCount = 0;
9541 m_BlocksFreeCount = 0;
9542 m_BlocksFreeSize = 0;
9543 m_IsFreeBitmap = 0;
9544 m_NullBlock->offset = 0;
9545 m_NullBlock->size = GetSize();
9546 Block* block = m_NullBlock->prevPhysical;
9547 m_NullBlock->prevPhysical = VMA_NULL;
9548 while (block)
9549 {
9550 Block* prev = block->prevPhysical;
        m_BlockAllocator.Free(block);
9552 block = prev;
9553 }
    memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
    memset(m_InnerIsFreeBitmap, 0, m_MemoryClasses * sizeof(uint32_t));
9556 m_GranularityHandler.Clear();
9557}
9558
9559void VmaBlockMetadata_TLSF::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
9560{
9561 Block* block = (Block*)allocHandle;
    VMA_ASSERT(!block->IsFree() && "Trying to set user data for a block that is not allocated!");
9563 block->UserData() = userData;
9564}
9565
9566void VmaBlockMetadata_TLSF::DebugLogAllAllocations() const
9567{
9568 for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
9569 if (!block->IsFree())
            DebugLogAllocation(block->offset, block->size, block->UserData());
9571}
9572
9573uint8_t VmaBlockMetadata_TLSF::SizeToMemoryClass(VkDeviceSize size) const
9574{
9575 if (size > SMALL_BUFFER_SIZE)
9576 return uint8_t(VMA_BITSCAN_MSB(size) - MEMORY_CLASS_SHIFT);
9577 return 0;
9578}
9579
9580uint16_t VmaBlockMetadata_TLSF::SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const
9581{
9582 if (memoryClass == 0)
9583 {
9584 if (IsVirtual())
9585 return static_cast<uint16_t>((size - 1) / 8);
9586 else
9587 return static_cast<uint16_t>((size - 1) / 64);
9588 }
9589 return static_cast<uint16_t>((size >> (memoryClass + MEMORY_CLASS_SHIFT - SECOND_LEVEL_INDEX)) ^ (1U << SECOND_LEVEL_INDEX));
9590}
9591
9592uint32_t VmaBlockMetadata_TLSF::GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const
9593{
9594 if (memoryClass == 0)
9595 return secondIndex;
9596
9597 const uint32_t index = static_cast<uint32_t>(memoryClass - 1) * (1 << SECOND_LEVEL_INDEX) + secondIndex;
9598 if (IsVirtual())
9599 return index + (1 << SECOND_LEVEL_INDEX);
9600 else
9601 return index + 4;
9602}
9603
9604uint32_t VmaBlockMetadata_TLSF::GetListIndex(VkDeviceSize size) const
9605{
9606 uint8_t memoryClass = SizeToMemoryClass(size);
    return GetListIndex(memoryClass, SizeToSecondIndex(size, memoryClass));
9608}
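/*
Worked example of the mapping above (illustrative; assumes the constants
defined earlier in this file have their usual values: MEMORY_CLASS_SHIFT == 7,
SECOND_LEVEL_INDEX == 5, SMALL_BUFFER_SIZE == 256, virtual block):

    // size = 1000 -> MSB bit index is 9, so memoryClass = 9 - 7 = 2.
    // secondIndex = (1000 >> (2 + 7 - 5)) ^ (1 << 5) = 62 ^ 32 = 30,
    // i.e. the XOR strips the implicit MSB and keeps the next 5 bits,
    // splitting each power-of-two range into 32 linearly spaced sub-buckets.
    // GetListIndex(2, 30) = (2 - 1) * 32 + 30 + 32 = 94.
*/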
9609
9610void VmaBlockMetadata_TLSF::RemoveFreeBlock(Block* block)
9611{
9612 VMA_ASSERT(block != m_NullBlock);
9613 VMA_ASSERT(block->IsFree());
9614
9615 if (block->NextFree() != VMA_NULL)
9616 block->NextFree()->PrevFree() = block->PrevFree();
9617 if (block->PrevFree() != VMA_NULL)
9618 block->PrevFree()->NextFree() = block->NextFree();
9619 else
9620 {
        uint8_t memClass = SizeToMemoryClass(block->size);
        uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
        uint32_t index = GetListIndex(memClass, secondIndex);
9624 VMA_ASSERT(m_FreeList[index] == block);
9625 m_FreeList[index] = block->NextFree();
9626 if (block->NextFree() == VMA_NULL)
9627 {
9628 m_InnerIsFreeBitmap[memClass] &= ~(1U << secondIndex);
9629 if (m_InnerIsFreeBitmap[memClass] == 0)
9630 m_IsFreeBitmap &= ~(1UL << memClass);
9631 }
9632 }
9633 block->MarkTaken();
9634 block->UserData() = VMA_NULL;
9635 --m_BlocksFreeCount;
9636 m_BlocksFreeSize -= block->size;
9637}
9638
9639void VmaBlockMetadata_TLSF::InsertFreeBlock(Block* block)
9640{
9641 VMA_ASSERT(block != m_NullBlock);
9642 VMA_ASSERT(!block->IsFree() && "Cannot insert block twice!");
9643
    uint8_t memClass = SizeToMemoryClass(block->size);
    uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
    uint32_t index = GetListIndex(memClass, secondIndex);
9647 VMA_ASSERT(index < m_ListsCount);
9648 block->PrevFree() = VMA_NULL;
9649 block->NextFree() = m_FreeList[index];
9650 m_FreeList[index] = block;
9651 if (block->NextFree() != VMA_NULL)
9652 block->NextFree()->PrevFree() = block;
9653 else
9654 {
9655 m_InnerIsFreeBitmap[memClass] |= 1U << secondIndex;
9656 m_IsFreeBitmap |= 1UL << memClass;
9657 }
9658 ++m_BlocksFreeCount;
9659 m_BlocksFreeSize += block->size;
9660}
9661
9662void VmaBlockMetadata_TLSF::MergeBlock(Block* block, Block* prev)
9663{
9664 VMA_ASSERT(block->prevPhysical == prev && "Cannot merge separate physical regions!");
9665 VMA_ASSERT(!prev->IsFree() && "Cannot merge block that belongs to free list!");
9666
9667 block->offset = prev->offset;
9668 block->size += prev->size;
9669 block->prevPhysical = prev->prevPhysical;
9670 if (block->prevPhysical)
9671 block->prevPhysical->nextPhysical = block;
    m_BlockAllocator.Free(prev);
9673}
9674
9675VmaBlockMetadata_TLSF::Block* VmaBlockMetadata_TLSF::FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const
9676{
9677 uint8_t memoryClass = SizeToMemoryClass(size);
9678 uint32_t innerFreeMap = m_InnerIsFreeBitmap[memoryClass] & (~0U << SizeToSecondIndex(size, memoryClass));
9679 if (!innerFreeMap)
9680 {
9681 // Check higher levels for available blocks
9682 uint32_t freeMap = m_IsFreeBitmap & (~0UL << (memoryClass + 1));
9683 if (!freeMap)
9684 return VMA_NULL; // No more memory available
9685
9686 // Find lowest free region
9687 memoryClass = VMA_BITSCAN_LSB(freeMap);
9688 innerFreeMap = m_InnerIsFreeBitmap[memoryClass];
9689 VMA_ASSERT(innerFreeMap != 0);
9690 }
9691 // Find lowest free subregion
9692 listIndex = GetListIndex(memoryClass, VMA_BITSCAN_LSB(innerFreeMap));
9693 VMA_ASSERT(m_FreeList[listIndex]);
9694 return m_FreeList[listIndex];
9695}
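/*
The lookup above relies on two bit tricks worth spelling out: masking with
(~0U << index) discards all free lists smaller than the request, and
VMA_BITSCAN_LSB picks the lowest surviving one, so the whole "find the first
suitable bucket" query costs two bit scans. A minimal standalone sketch of
the same idea (illustrative, not part of the library):

    // Returns the lowest set bit position >= index, or UINT8_MAX if none.
    static uint8_t FirstSetAtOrAbove(uint32_t bits, uint32_t index)
    {
        const uint32_t masked = bits & (~0U << index); // keep bits >= index
        return masked != 0 ? VMA_BITSCAN_LSB(masked) : UINT8_MAX;
    }
*/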
9696
9697bool VmaBlockMetadata_TLSF::CheckBlock(
9698 Block& block,
9699 uint32_t listIndex,
9700 VkDeviceSize allocSize,
9701 VkDeviceSize allocAlignment,
9702 VmaSuballocationType allocType,
9703 VmaAllocationRequest* pAllocationRequest)
9704{
9705 VMA_ASSERT(block.IsFree() && "Block is already taken!");
9706
    VkDeviceSize alignedOffset = VmaAlignUp(block.offset, allocAlignment);
9708 if (block.size < allocSize + alignedOffset - block.offset)
9709 return false;
9710
    // Check for granularity conflicts
    if (!IsVirtual() &&
        m_GranularityHandler.CheckConflictAndAlignUp(alignedOffset, allocSize, block.offset, block.size, allocType))
        return false;
9715
9716 // Alloc successful
9717 pAllocationRequest->type = VmaAllocationRequestType::TLSF;
9718 pAllocationRequest->allocHandle = (VmaAllocHandle)&block;
9719 pAllocationRequest->size = allocSize - GetDebugMargin();
9720 pAllocationRequest->customData = (void*)allocType;
9721 pAllocationRequest->algorithmData = alignedOffset;
9722
    // Place the block at the start of the list if it's a normal block
9724 if (listIndex != m_ListsCount && block.PrevFree())
9725 {
9726 block.PrevFree()->NextFree() = block.NextFree();
9727 if (block.NextFree())
9728 block.NextFree()->PrevFree() = block.PrevFree();
9729 block.PrevFree() = VMA_NULL;
9730 block.NextFree() = m_FreeList[listIndex];
9731 m_FreeList[listIndex] = &block;
9732 if (block.NextFree())
9733 block.NextFree()->PrevFree() = &block;
9734 }
9735
9736 return true;
9737}
9738#endif // _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
9739#endif // _VMA_BLOCK_METADATA_TLSF
9740
9741#ifndef _VMA_BLOCK_VECTOR
9742/*
9743Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
9744Vulkan memory type.
9745
9746Synchronized internally with a mutex.
9747*/
9748class VmaBlockVector
9749{
9750 friend struct VmaDefragmentationContext_T;
9751 VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockVector)
9752public:
9753 VmaBlockVector(
9754 VmaAllocator hAllocator,
9755 VmaPool hParentPool,
9756 uint32_t memoryTypeIndex,
9757 VkDeviceSize preferredBlockSize,
9758 size_t minBlockCount,
9759 size_t maxBlockCount,
9760 VkDeviceSize bufferImageGranularity,
9761 bool explicitBlockSize,
9762 uint32_t algorithm,
9763 float priority,
9764 VkDeviceSize minAllocationAlignment,
9765 void* pMemoryAllocateNext);
9766 ~VmaBlockVector();
9767
9768 VmaAllocator GetAllocator() const { return m_hAllocator; }
9769 VmaPool GetParentPool() const { return m_hParentPool; }
9770 bool IsCustomPool() const { return m_hParentPool != VMA_NULL; }
9771 uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
9772 VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
9773 VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
9774 uint32_t GetAlgorithm() const { return m_Algorithm; }
9775 bool HasExplicitBlockSize() const { return m_ExplicitBlockSize; }
9776 float GetPriority() const { return m_Priority; }
9777 const void* GetAllocationNextPtr() const { return m_pMemoryAllocateNext; }
9778 // To be used only while the m_Mutex is locked. Used during defragmentation.
9779 size_t GetBlockCount() const { return m_Blocks.size(); }
9780 // To be used only while the m_Mutex is locked. Used during defragmentation.
9781 VmaDeviceMemoryBlock* GetBlock(size_t index) const { return m_Blocks[index]; }
9782 VMA_RW_MUTEX &GetMutex() { return m_Mutex; }
9783
9784 VkResult CreateMinBlocks();
9785 void AddStatistics(VmaStatistics& inoutStats);
9786 void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
9787 bool IsEmpty();
9788 bool IsCorruptionDetectionEnabled() const;
9789
9790 VkResult Allocate(
9791 VkDeviceSize size,
9792 VkDeviceSize alignment,
9793 const VmaAllocationCreateInfo& createInfo,
9794 VmaSuballocationType suballocType,
9795 size_t allocationCount,
9796 VmaAllocation* pAllocations);
9797
9798 void Free(const VmaAllocation hAllocation);
9799
9800#if VMA_STATS_STRING_ENABLED
9801 void PrintDetailedMap(class VmaJsonWriter& json);
9802#endif
9803
9804 VkResult CheckCorruption();
9805
9806private:
9807 const VmaAllocator m_hAllocator;
9808 const VmaPool m_hParentPool;
9809 const uint32_t m_MemoryTypeIndex;
9810 const VkDeviceSize m_PreferredBlockSize;
9811 const size_t m_MinBlockCount;
9812 const size_t m_MaxBlockCount;
9813 const VkDeviceSize m_BufferImageGranularity;
9814 const bool m_ExplicitBlockSize;
9815 const uint32_t m_Algorithm;
9816 const float m_Priority;
9817 const VkDeviceSize m_MinAllocationAlignment;
9818
9819 void* const m_pMemoryAllocateNext;
9820 VMA_RW_MUTEX m_Mutex;
9821 // Incrementally sorted by sumFreeSize, ascending.
9822 VmaVector<VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*>> m_Blocks;
9823 uint32_t m_NextBlockId;
9824 bool m_IncrementalSort = true;
9825
9826 void SetIncrementalSort(bool val) { m_IncrementalSort = val; }
9827
9828 VkDeviceSize CalcMaxBlockSize() const;
9829 // Finds and removes given block from vector.
9830 void Remove(VmaDeviceMemoryBlock* pBlock);
9831 // Performs single step in sorting m_Blocks. They may not be fully sorted
9832 // after this call.
9833 void IncrementallySortBlocks();
9834 void SortByFreeSize();
9835
9836 VkResult AllocatePage(
9837 VkDeviceSize size,
9838 VkDeviceSize alignment,
9839 const VmaAllocationCreateInfo& createInfo,
9840 VmaSuballocationType suballocType,
9841 VmaAllocation* pAllocation);
9842
9843 VkResult AllocateFromBlock(
9844 VmaDeviceMemoryBlock* pBlock,
9845 VkDeviceSize size,
9846 VkDeviceSize alignment,
9847 VmaAllocationCreateFlags allocFlags,
9848 void* pUserData,
9849 VmaSuballocationType suballocType,
9850 uint32_t strategy,
9851 VmaAllocation* pAllocation);
9852
9853 VkResult CommitAllocationRequest(
9854 VmaAllocationRequest& allocRequest,
9855 VmaDeviceMemoryBlock* pBlock,
9856 VkDeviceSize alignment,
9857 VmaAllocationCreateFlags allocFlags,
9858 void* pUserData,
9859 VmaSuballocationType suballocType,
9860 VmaAllocation* pAllocation);
9861
9862 VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
9863 bool HasEmptyBlock();
9864};
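/*
"Incrementally sorted" above means m_Blocks is nudged toward ascending
sumFreeSize order rather than fully re-sorted after every change. A minimal
sketch of the idea, one bubble-sort step that stops at the first swap
(hedged; the actual IncrementallySortBlocks() is defined further down in
this file):

    for (size_t i = 1; i < blocks.size(); ++i)
    {
        if (blocks[i - 1]->m_pMetadata->GetSumFreeSize() >
            blocks[i]->m_pMetadata->GetSumFreeSize())
        {
            VMA_SWAP(blocks[i - 1], blocks[i]);
            break; // one step per call; order converges over repeated calls
        }
    }
*/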
9865#endif // _VMA_BLOCK_VECTOR
9866
9867#ifndef _VMA_DEFRAGMENTATION_CONTEXT
9868struct VmaDefragmentationContext_T
9869{
9870 VMA_CLASS_NO_COPY_NO_MOVE(VmaDefragmentationContext_T)
9871public:
9872 VmaDefragmentationContext_T(
9873 VmaAllocator hAllocator,
9874 const VmaDefragmentationInfo& info);
9875 ~VmaDefragmentationContext_T();
9876
9877 void GetStats(VmaDefragmentationStats& outStats) { outStats = m_GlobalStats; }
9878
9879 VkResult DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo);
9880 VkResult DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo);
9881
9882private:
9883 // Max number of allocations to ignore due to size constraints before ending single pass
9884 static const uint8_t MAX_ALLOCS_TO_IGNORE = 16;
9885 enum class CounterStatus { Pass, Ignore, End };
9886
9887 struct FragmentedBlock
9888 {
9889 uint32_t data;
9890 VmaDeviceMemoryBlock* block;
9891 };
9892 struct StateBalanced
9893 {
9894 VkDeviceSize avgFreeSize = 0;
9895 VkDeviceSize avgAllocSize = UINT64_MAX;
9896 };
9897 struct StateExtensive
9898 {
9899 enum class Operation : uint8_t
9900 {
9901 FindFreeBlockBuffer, FindFreeBlockTexture, FindFreeBlockAll,
9902 MoveBuffers, MoveTextures, MoveAll,
9903 Cleanup, Done
9904 };
9905
9906 Operation operation = Operation::FindFreeBlockTexture;
9907 size_t firstFreeBlock = SIZE_MAX;
9908 };
9909 struct MoveAllocationData
9910 {
9911 VkDeviceSize size;
9912 VkDeviceSize alignment;
9913 VmaSuballocationType type;
9914 VmaAllocationCreateFlags flags;
9915 VmaDefragmentationMove move = {};
9916 };
9917
9918 const VkDeviceSize m_MaxPassBytes;
9919 const uint32_t m_MaxPassAllocations;
9920 const PFN_vmaCheckDefragmentationBreakFunction m_BreakCallback;
9921 void* m_BreakCallbackUserData;
9922
9923 VmaStlAllocator<VmaDefragmentationMove> m_MoveAllocator;
9924 VmaVector<VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove>> m_Moves;
9925
9926 uint8_t m_IgnoredAllocs = 0;
9927 uint32_t m_Algorithm;
9928 uint32_t m_BlockVectorCount;
9929 VmaBlockVector* m_PoolBlockVector;
9930 VmaBlockVector** m_pBlockVectors;
9931 size_t m_ImmovableBlockCount = 0;
    VmaDefragmentationStats m_GlobalStats = { 0 };
    VmaDefragmentationStats m_PassStats = { 0 };
9934 void* m_AlgorithmState = VMA_NULL;
9935
9936 static MoveAllocationData GetMoveData(VmaAllocHandle handle, VmaBlockMetadata* metadata);
9937 CounterStatus CheckCounters(VkDeviceSize bytes);
9938 bool IncrementCounters(VkDeviceSize bytes);
9939 bool ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block);
9940 bool AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector);
9941
9942 bool ComputeDefragmentation(VmaBlockVector& vector, size_t index);
9943 bool ComputeDefragmentation_Fast(VmaBlockVector& vector);
9944 bool ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update);
9945 bool ComputeDefragmentation_Full(VmaBlockVector& vector);
9946 bool ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index);
9947
9948 void UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state);
9949 bool MoveDataToFreeBlocks(VmaSuballocationType currentType,
9950 VmaBlockVector& vector, size_t firstFreeBlock,
9951 bool& texturePresent, bool& bufferPresent, bool& otherPresent);
9952};
9953#endif // _VMA_DEFRAGMENTATION_CONTEXT
9954
9955#ifndef _VMA_POOL_T
9956struct VmaPool_T
9957{
9958 friend struct VmaPoolListItemTraits;
9959 VMA_CLASS_NO_COPY_NO_MOVE(VmaPool_T)
9960public:
9961 VmaBlockVector m_BlockVector;
9962 VmaDedicatedAllocationList m_DedicatedAllocations;
9963
9964 VmaPool_T(
9965 VmaAllocator hAllocator,
9966 const VmaPoolCreateInfo& createInfo,
9967 VkDeviceSize preferredBlockSize);
9968 ~VmaPool_T();
9969
9970 uint32_t GetId() const { return m_Id; }
9971 void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
9972
9973 const char* GetName() const { return m_Name; }
9974 void SetName(const char* pName);
9975
9976#if VMA_STATS_STRING_ENABLED
9977 //void PrintDetailedMap(class VmaStringBuilder& sb);
9978#endif
9979
9980private:
9981 uint32_t m_Id;
9982 char* m_Name;
9983 VmaPool_T* m_PrevPool = VMA_NULL;
9984 VmaPool_T* m_NextPool = VMA_NULL;
9985};
9986
9987struct VmaPoolListItemTraits
9988{
9989 typedef VmaPool_T ItemType;
9990
9991 static ItemType* GetPrev(const ItemType* item) { return item->m_PrevPool; }
9992 static ItemType* GetNext(const ItemType* item) { return item->m_NextPool; }
9993 static ItemType*& AccessPrev(ItemType* item) { return item->m_PrevPool; }
9994 static ItemType*& AccessNext(ItemType* item) { return item->m_NextPool; }
9995};
9996#endif // _VMA_POOL_T
9997
9998#ifndef _VMA_CURRENT_BUDGET_DATA
9999struct VmaCurrentBudgetData
10000{
10001 VMA_CLASS_NO_COPY_NO_MOVE(VmaCurrentBudgetData)
10002public:
10003
10004 VMA_ATOMIC_UINT32 m_BlockCount[VK_MAX_MEMORY_HEAPS];
10005 VMA_ATOMIC_UINT32 m_AllocationCount[VK_MAX_MEMORY_HEAPS];
10006 VMA_ATOMIC_UINT64 m_BlockBytes[VK_MAX_MEMORY_HEAPS];
10007 VMA_ATOMIC_UINT64 m_AllocationBytes[VK_MAX_MEMORY_HEAPS];
10008
10009#if VMA_MEMORY_BUDGET
10010 VMA_ATOMIC_UINT32 m_OperationsSinceBudgetFetch;
10011 VMA_RW_MUTEX m_BudgetMutex;
10012 uint64_t m_VulkanUsage[VK_MAX_MEMORY_HEAPS];
10013 uint64_t m_VulkanBudget[VK_MAX_MEMORY_HEAPS];
10014 uint64_t m_BlockBytesAtBudgetFetch[VK_MAX_MEMORY_HEAPS];
10015#endif // VMA_MEMORY_BUDGET
10016
10017 VmaCurrentBudgetData();
10018
10019 void AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
10020 void RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
10021};
10022
10023#ifndef _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
10024VmaCurrentBudgetData::VmaCurrentBudgetData()
10025{
10026 for (uint32_t heapIndex = 0; heapIndex < VK_MAX_MEMORY_HEAPS; ++heapIndex)
10027 {
10028 m_BlockCount[heapIndex] = 0;
10029 m_AllocationCount[heapIndex] = 0;
10030 m_BlockBytes[heapIndex] = 0;
10031 m_AllocationBytes[heapIndex] = 0;
10032#if VMA_MEMORY_BUDGET
10033 m_VulkanUsage[heapIndex] = 0;
10034 m_VulkanBudget[heapIndex] = 0;
10035 m_BlockBytesAtBudgetFetch[heapIndex] = 0;
10036#endif
10037 }
10038
10039#if VMA_MEMORY_BUDGET
10040 m_OperationsSinceBudgetFetch = 0;
10041#endif
10042}
10043
10044void VmaCurrentBudgetData::AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
10045{
10046 m_AllocationBytes[heapIndex] += allocationSize;
10047 ++m_AllocationCount[heapIndex];
10048#if VMA_MEMORY_BUDGET
10049 ++m_OperationsSinceBudgetFetch;
10050#endif
10051}
10052
10053void VmaCurrentBudgetData::RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
10054{
10055 VMA_ASSERT(m_AllocationBytes[heapIndex] >= allocationSize);
10056 m_AllocationBytes[heapIndex] -= allocationSize;
10057 VMA_ASSERT(m_AllocationCount[heapIndex] > 0);
10058 --m_AllocationCount[heapIndex];
10059#if VMA_MEMORY_BUDGET
10060 ++m_OperationsSinceBudgetFetch;
10061#endif
10062}
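/*
How these counters combine (a sketch of the intent; the authoritative logic
lives in VmaAllocator_T::GetHeapBudgets): the usage reported through
VK_EXT_memory_budget is only refreshed periodically, so between fetches the
estimate is corrected by whatever this allocator has allocated or freed since
the last fetch (the real code also guards against underflow):

    // estimatedUsage = usageAtLastFetch + (blockBytesNow - blockBytesAtLastFetch)
    uint64_t estimatedUsage = m_VulkanUsage[heapIndex] +
        (m_BlockBytes[heapIndex] - m_BlockBytesAtBudgetFetch[heapIndex]);
*/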
10063#endif // _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
10064#endif // _VMA_CURRENT_BUDGET_DATA
10065
10066#ifndef _VMA_ALLOCATION_OBJECT_ALLOCATOR
10067/*
10068Thread-safe wrapper over VmaPoolAllocator free list, for allocation of VmaAllocation_T objects.
10069*/
10070class VmaAllocationObjectAllocator
10071{
10072 VMA_CLASS_NO_COPY_NO_MOVE(VmaAllocationObjectAllocator)
10073public:
10074 VmaAllocationObjectAllocator(const VkAllocationCallbacks* pAllocationCallbacks)
10075 : m_Allocator(pAllocationCallbacks, 1024) {}
10076
10077 template<typename... Types> VmaAllocation Allocate(Types&&... args);
10078 void Free(VmaAllocation hAlloc);
10079
10080private:
10081 VMA_MUTEX m_Mutex;
10082 VmaPoolAllocator<VmaAllocation_T> m_Allocator;
10083};
10084
10085template<typename... Types>
10086VmaAllocation VmaAllocationObjectAllocator::Allocate(Types&&... args)
10087{
10088 VmaMutexLock mutexLock(m_Mutex);
10089 return m_Allocator.Alloc<Types...>(std::forward<Types>(args)...);
10090}
10091
10092void VmaAllocationObjectAllocator::Free(VmaAllocation hAlloc)
10093{
10094 VmaMutexLock mutexLock(m_Mutex);
    m_Allocator.Free(hAlloc);
10096}
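/*
Usage sketch (illustrative): the variadic arguments are perfectly forwarded
to the VmaAllocation_T constructor under the lock, so callers allocate and
construct in a single thread-safe call. `ctorArgs...` below is hypothetical
and stands for whatever VmaAllocation_T's constructor actually takes:

    VmaAllocation alloc = m_AllocationObjectAllocator.Allocate(ctorArgs...);
    // ... use alloc ...
    m_AllocationObjectAllocator.Free(alloc);
*/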
10097#endif // _VMA_ALLOCATION_OBJECT_ALLOCATOR
10098
10099#ifndef _VMA_VIRTUAL_BLOCK_T
10100struct VmaVirtualBlock_T
10101{
10102 VMA_CLASS_NO_COPY_NO_MOVE(VmaVirtualBlock_T)
10103public:
10104 const bool m_AllocationCallbacksSpecified;
10105 const VkAllocationCallbacks m_AllocationCallbacks;
10106
10107 VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo);
10108 ~VmaVirtualBlock_T();
10109
10110 VkResult Init() { return VK_SUCCESS; }
10111 bool IsEmpty() const { return m_Metadata->IsEmpty(); }
    void Free(VmaVirtualAllocation allocation) { m_Metadata->Free((VmaAllocHandle)allocation); }
    void SetAllocationUserData(VmaVirtualAllocation allocation, void* userData) { m_Metadata->SetAllocationUserData((VmaAllocHandle)allocation, userData); }
10114 void Clear() { m_Metadata->Clear(); }
10115
10116 const VkAllocationCallbacks* GetAllocationCallbacks() const;
10117 void GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo);
10118 VkResult Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
10119 VkDeviceSize* outOffset);
10120 void GetStatistics(VmaStatistics& outStats) const;
10121 void CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const;
10122#if VMA_STATS_STRING_ENABLED
10123 void BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const;
10124#endif
10125
10126private:
10127 VmaBlockMetadata* m_Metadata;
10128};
10129
10130#ifndef _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
10131VmaVirtualBlock_T::VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo)
10132 : m_AllocationCallbacksSpecified(createInfo.pAllocationCallbacks != VMA_NULL),
10133 m_AllocationCallbacks(createInfo.pAllocationCallbacks != VMA_NULL ? *createInfo.pAllocationCallbacks : VmaEmptyAllocationCallbacks)
10134{
10135 const uint32_t algorithm = createInfo.flags & VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK;
10136 switch (algorithm)
10137 {
10138 case 0:
10139 m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
10140 break;
10141 case VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT:
10142 m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_Linear)(VK_NULL_HANDLE, 1, true);
10143 break;
10144 default:
10145 VMA_ASSERT(0);
10146 m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
10147 }
10148
    m_Metadata->Init(createInfo.size);
10150}
10151
10152VmaVirtualBlock_T::~VmaVirtualBlock_T()
10153{
10154 // Define macro VMA_DEBUG_LOG_FORMAT or more specialized VMA_LEAK_LOG_FORMAT
10155 // to receive the list of the unfreed allocations.
10156 if (!m_Metadata->IsEmpty())
10157 m_Metadata->DebugLogAllAllocations();
10158 // This is the most important assert in the entire library.
10159 // Hitting it means you have some memory leak - unreleased virtual allocations.
10160 VMA_ASSERT_LEAK(m_Metadata->IsEmpty() && "Some virtual allocations were not freed before destruction of this virtual block!");
10161
    vma_delete(GetAllocationCallbacks(), m_Metadata);
10163}
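/*
A hedged example of enabling the leak report mentioned above: define the
logging macro before including this header in the translation unit that
defines VMA_IMPLEMENTATION (macro name as documented elsewhere in this file;
something like):

    #define VMA_LEAK_LOG_FORMAT(format, ...) \
        do { printf((format), __VA_ARGS__); printf("\n"); } while(false)
    #define VMA_IMPLEMENTATION
    #include "vk_mem_alloc.h"
*/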
10164
10165const VkAllocationCallbacks* VmaVirtualBlock_T::GetAllocationCallbacks() const
10166{
10167 return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
10168}
10169
10170void VmaVirtualBlock_T::GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo)
10171{
    m_Metadata->GetAllocationInfo((VmaAllocHandle)allocation, outInfo);
10173}
10174
10175VkResult VmaVirtualBlock_T::Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
10176 VkDeviceSize* outOffset)
10177{
    VmaAllocationRequest request = {};
    if (m_Metadata->CreateAllocationRequest(
        createInfo.size, // allocSize
        VMA_MAX(createInfo.alignment, (VkDeviceSize)1), // allocAlignment
        (createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0, // upperAddress
        VMA_SUBALLOCATION_TYPE_UNKNOWN, // allocType - unimportant
        createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK, // strategy
        &request))
    {
        m_Metadata->Alloc(request,
            VMA_SUBALLOCATION_TYPE_UNKNOWN, // type - unimportant
            createInfo.pUserData);
        outAllocation = (VmaVirtualAllocation)request.allocHandle;
        if(outOffset)
            *outOffset = m_Metadata->GetAllocationOffset(request.allocHandle);
        return VK_SUCCESS;
    }
    outAllocation = (VmaVirtualAllocation)VK_NULL_HANDLE;
    if (outOffset)
        *outOffset = UINT64_MAX;
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
10199}
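/*
For context, the public-API path that exercises Allocate() above
(a sketch; error handling elided):

    VmaVirtualBlockCreateInfo blockCreateInfo = {};
    blockCreateInfo.size = 1048576; // 1 MiB of "virtual" space

    VmaVirtualBlock block;
    vmaCreateVirtualBlock(&blockCreateInfo, &block);

    VmaVirtualAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.size = 4096;

    VmaVirtualAllocation alloc;
    VkDeviceSize offset;
    vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);

    vmaVirtualFree(block, alloc);
    vmaDestroyVirtualBlock(block);
*/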
10200
10201void VmaVirtualBlock_T::GetStatistics(VmaStatistics& outStats) const
10202{
10203 VmaClearStatistics(outStats);
    m_Metadata->AddStatistics(outStats);
10205}
10206
10207void VmaVirtualBlock_T::CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const
10208{
10209 VmaClearDetailedStatistics(outStats);
    m_Metadata->AddDetailedStatistics(outStats);
10211}
10212
10213#if VMA_STATS_STRING_ENABLED
10214void VmaVirtualBlock_T::BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const
10215{
10216 VmaJsonWriter json(GetAllocationCallbacks(), sb);
10217 json.BeginObject();
10218
    VmaDetailedStatistics stats;
    CalculateDetailedStatistics(stats);

    json.WriteString("Stats");
    VmaPrintDetailedStatistics(json, stats);

    if (detailedMap)
    {
        json.WriteString("Details");
10228 json.BeginObject();
10229 m_Metadata->PrintDetailedMap(json);
10230 json.EndObject();
10231 }
10232
10233 json.EndObject();
10234}
10235#endif // VMA_STATS_STRING_ENABLED
10236#endif // _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
10237#endif // _VMA_VIRTUAL_BLOCK_T
10238
10239
10240// Main allocator object.
10241struct VmaAllocator_T
10242{
10243 VMA_CLASS_NO_COPY_NO_MOVE(VmaAllocator_T)
10244public:
10245 const bool m_UseMutex;
10246 const uint32_t m_VulkanApiVersion;
10247 bool m_UseKhrDedicatedAllocation; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
10248 bool m_UseKhrBindMemory2; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
10249 bool m_UseExtMemoryBudget;
10250 bool m_UseAmdDeviceCoherentMemory;
10251 bool m_UseKhrBufferDeviceAddress;
10252 bool m_UseExtMemoryPriority;
10253 bool m_UseKhrMaintenance4;
10254 bool m_UseKhrMaintenance5;
10255 bool m_UseKhrExternalMemoryWin32;
10256 const VkDevice m_hDevice;
10257 const VkInstance m_hInstance;
10258 const bool m_AllocationCallbacksSpecified;
10259 const VkAllocationCallbacks m_AllocationCallbacks;
10260 VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
10261 VmaAllocationObjectAllocator m_AllocationObjectAllocator;
10262
10263 // Each bit (1 << i) is set if HeapSizeLimit is enabled for that heap, so cannot allocate more than the heap size.
10264 uint32_t m_HeapSizeLimitMask;
10265
10266 VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
10267 VkPhysicalDeviceMemoryProperties m_MemProps;
10268
10269 // Default pools.
10270 VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
10271 VmaDedicatedAllocationList m_DedicatedAllocations[VK_MAX_MEMORY_TYPES];
10272
10273 VmaCurrentBudgetData m_Budget;
10274 VMA_ATOMIC_UINT32 m_DeviceMemoryCount; // Total number of VkDeviceMemory objects.
10275
10276 VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
10277 VkResult Init(const VmaAllocatorCreateInfo* pCreateInfo);
10278 ~VmaAllocator_T();
10279
10280 const VkAllocationCallbacks* GetAllocationCallbacks() const
10281 {
10282 return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
10283 }
10284 const VmaVulkanFunctions& GetVulkanFunctions() const
10285 {
10286 return m_VulkanFunctions;
10287 }
10288
10289 VkPhysicalDevice GetPhysicalDevice() const { return m_PhysicalDevice; }
10290
10291 VkDeviceSize GetBufferImageGranularity() const
10292 {
10293 return VMA_MAX(
10294 static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
10295 m_PhysicalDeviceProperties.limits.bufferImageGranularity);
10296 }
10297
10298 uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
10299 uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
10300
10301 uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
10302 {
10303 VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
10304 return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
10305 }
10306 // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
10307 bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
10308 {
10309 return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
10310 VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
10311 }
10312 // Minimum alignment for all allocations in specific memory type.
10313 VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
10314 {
10315 return IsMemoryTypeNonCoherent(memTypeIndex) ?
10316 VMA_MAX((VkDeviceSize)VMA_MIN_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
10317 (VkDeviceSize)VMA_MIN_ALIGNMENT;
10318 }
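    /*
    Why the nonCoherentAtomSize clamp above matters (illustrative): on a
    HOST_VISIBLE but not HOST_COHERENT type, writes through a mapped pointer
    must be flushed, and VkMappedMemoryRange must be aligned to
    nonCoherentAtomSize; aligning every allocation to it makes that rounding
    always safe. Typical user-side pattern:

        memcpy(mappedPtr, srcData, dataSize);
        vmaFlushAllocation(allocator, allocation, 0, dataSize); // range rounded internally
    */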
10319
10320 bool IsIntegratedGpu() const
10321 {
10322 return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
10323 }
10324
10325 uint32_t GetGlobalMemoryTypeBits() const { return m_GlobalMemoryTypeBits; }
10326
10327 void GetBufferMemoryRequirements(
10328 VkBuffer hBuffer,
10329 VkMemoryRequirements& memReq,
10330 bool& requiresDedicatedAllocation,
10331 bool& prefersDedicatedAllocation) const;
10332 void GetImageMemoryRequirements(
10333 VkImage hImage,
10334 VkMemoryRequirements& memReq,
10335 bool& requiresDedicatedAllocation,
10336 bool& prefersDedicatedAllocation) const;
10337 VkResult FindMemoryTypeIndex(
10338 uint32_t memoryTypeBits,
10339 const VmaAllocationCreateInfo* pAllocationCreateInfo,
10340 VmaBufferImageUsage bufImgUsage,
10341 uint32_t* pMemoryTypeIndex) const;
10342
10343 // Main allocation function.
10344 VkResult AllocateMemory(
10345 const VkMemoryRequirements& vkMemReq,
10346 bool requiresDedicatedAllocation,
10347 bool prefersDedicatedAllocation,
10348 VkBuffer dedicatedBuffer,
10349 VkImage dedicatedImage,
10350 VmaBufferImageUsage dedicatedBufferImageUsage,
10351 const VmaAllocationCreateInfo& createInfo,
10352 VmaSuballocationType suballocType,
10353 size_t allocationCount,
10354 VmaAllocation* pAllocations);
10355
10356 // Main deallocation function.
10357 void FreeMemory(
10358 size_t allocationCount,
10359 const VmaAllocation* pAllocations);
10360
10361 void CalculateStatistics(VmaTotalStatistics* pStats);
10362
10363 void GetHeapBudgets(
10364 VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount);
10365
10366#if VMA_STATS_STRING_ENABLED
10367 void PrintDetailedMap(class VmaJsonWriter& json);
10368#endif
10369
10370 void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
10371 void GetAllocationInfo2(VmaAllocation hAllocation, VmaAllocationInfo2* pAllocationInfo);
10372
10373 VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
10374 void DestroyPool(VmaPool pool);
10375 void GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats);
10376 void CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats);
10377
10378 void SetCurrentFrameIndex(uint32_t frameIndex);
10379 uint32_t GetCurrentFrameIndex() const { return m_CurrentFrameIndex.load(); }
10380
10381 VkResult CheckPoolCorruption(VmaPool hPool);
10382 VkResult CheckCorruption(uint32_t memoryTypeBits);
10383
10384 // Call to Vulkan function vkAllocateMemory with accompanying bookkeeping.
10385 VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
10386 // Call to Vulkan function vkFreeMemory with accompanying bookkeeping.
10387 void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
10388 // Call to Vulkan function vkBindBufferMemory or vkBindBufferMemory2KHR.
10389 VkResult BindVulkanBuffer(
10390 VkDeviceMemory memory,
10391 VkDeviceSize memoryOffset,
10392 VkBuffer buffer,
10393 const void* pNext);
10394 // Call to Vulkan function vkBindImageMemory or vkBindImageMemory2KHR.
10395 VkResult BindVulkanImage(
10396 VkDeviceMemory memory,
10397 VkDeviceSize memoryOffset,
10398 VkImage image,
10399 const void* pNext);
10400
10401 VkResult Map(VmaAllocation hAllocation, void** ppData);
10402 void Unmap(VmaAllocation hAllocation);
10403
10404 VkResult BindBufferMemory(
10405 VmaAllocation hAllocation,
10406 VkDeviceSize allocationLocalOffset,
10407 VkBuffer hBuffer,
10408 const void* pNext);
10409 VkResult BindImageMemory(
10410 VmaAllocation hAllocation,
10411 VkDeviceSize allocationLocalOffset,
10412 VkImage hImage,
10413 const void* pNext);
10414
10415 VkResult FlushOrInvalidateAllocation(
10416 VmaAllocation hAllocation,
10417 VkDeviceSize offset, VkDeviceSize size,
10418 VMA_CACHE_OPERATION op);
10419 VkResult FlushOrInvalidateAllocations(
10420 uint32_t allocationCount,
10421 const VmaAllocation* allocations,
10422 const VkDeviceSize* offsets, const VkDeviceSize* sizes,
10423 VMA_CACHE_OPERATION op);
10424
10425 VkResult CopyMemoryToAllocation(
10426 const void* pSrcHostPointer,
10427 VmaAllocation dstAllocation,
10428 VkDeviceSize dstAllocationLocalOffset,
10429 VkDeviceSize size);
10430 VkResult CopyAllocationToMemory(
10431 VmaAllocation srcAllocation,
10432 VkDeviceSize srcAllocationLocalOffset,
10433 void* pDstHostPointer,
10434 VkDeviceSize size);
10435
10436 void FillAllocation(const VmaAllocation hAllocation, uint8_t pattern);
10437
10438 /*
10439 Returns bit mask of memory types that can support defragmentation on GPU as
10440 they support creation of required buffer for copy operations.
10441 */
10442 uint32_t GetGpuDefragmentationMemoryTypeBits();
10443
10444#if VMA_EXTERNAL_MEMORY
10445 VkExternalMemoryHandleTypeFlagsKHR GetExternalMemoryHandleTypeFlags(uint32_t memTypeIndex) const
10446 {
10447 return m_TypeExternalMemoryHandleTypes[memTypeIndex];
10448 }
10449#endif // #if VMA_EXTERNAL_MEMORY
10450
10451private:
10452 VkDeviceSize m_PreferredLargeHeapBlockSize;
10453
10454 VkPhysicalDevice m_PhysicalDevice;
10455 VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
10456 VMA_ATOMIC_UINT32 m_GpuDefragmentationMemoryTypeBits; // UINT32_MAX means uninitialized.
10457#if VMA_EXTERNAL_MEMORY
10458 VkExternalMemoryHandleTypeFlagsKHR m_TypeExternalMemoryHandleTypes[VK_MAX_MEMORY_TYPES];
10459#endif // #if VMA_EXTERNAL_MEMORY
10460
10461 VMA_RW_MUTEX m_PoolsMutex;
10462 typedef VmaIntrusiveLinkedList<VmaPoolListItemTraits> PoolList;
10463 // Protected by m_PoolsMutex.
10464 PoolList m_Pools;
10465 uint32_t m_NextPoolId;
10466
10467 VmaVulkanFunctions m_VulkanFunctions;
10468
10469 // Global bit mask AND-ed with any memoryTypeBits to disallow certain memory types.
10470 uint32_t m_GlobalMemoryTypeBits;
10471
10472 void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
10473
10474#if VMA_STATIC_VULKAN_FUNCTIONS == 1
10475 void ImportVulkanFunctions_Static();
10476#endif
10477
10478 void ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions);
10479
10480#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
10481 void ImportVulkanFunctions_Dynamic();
10482#endif
10483
10484 void ValidateVulkanFunctions();
10485
10486 VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
10487
10488 VkResult AllocateMemoryOfType(
10489 VmaPool pool,
10490 VkDeviceSize size,
10491 VkDeviceSize alignment,
10492 bool dedicatedPreferred,
10493 VkBuffer dedicatedBuffer,
10494 VkImage dedicatedImage,
10495 VmaBufferImageUsage dedicatedBufferImageUsage,
10496 const VmaAllocationCreateInfo& createInfo,
10497 uint32_t memTypeIndex,
10498 VmaSuballocationType suballocType,
10499 VmaDedicatedAllocationList& dedicatedAllocations,
10500 VmaBlockVector& blockVector,
10501 size_t allocationCount,
10502 VmaAllocation* pAllocations);
10503
10504 // Helper function only to be used inside AllocateDedicatedMemory.
10505 VkResult AllocateDedicatedMemoryPage(
10506 VmaPool pool,
10507 VkDeviceSize size,
10508 VmaSuballocationType suballocType,
10509 uint32_t memTypeIndex,
10510 const VkMemoryAllocateInfo& allocInfo,
10511 bool map,
10512 bool isUserDataString,
10513 bool isMappingAllowed,
10514 void* pUserData,
10515 VmaAllocation* pAllocation);
10516
10517 // Allocates and registers new VkDeviceMemory specifically for dedicated allocations.
10518 VkResult AllocateDedicatedMemory(
10519 VmaPool pool,
10520 VkDeviceSize size,
10521 VmaSuballocationType suballocType,
10522 VmaDedicatedAllocationList& dedicatedAllocations,
10523 uint32_t memTypeIndex,
10524 bool map,
10525 bool isUserDataString,
10526 bool isMappingAllowed,
10527 bool canAliasMemory,
10528 void* pUserData,
10529 float priority,
10530 VkBuffer dedicatedBuffer,
10531 VkImage dedicatedImage,
10532 VmaBufferImageUsage dedicatedBufferImageUsage,
10533 size_t allocationCount,
10534 VmaAllocation* pAllocations,
10535 const void* pNextChain = VMA_NULL);
10536
10537 void FreeDedicatedMemory(const VmaAllocation allocation);
10538
10539 VkResult CalcMemTypeParams(
10540 VmaAllocationCreateInfo& outCreateInfo,
10541 uint32_t memTypeIndex,
10542 VkDeviceSize size,
10543 size_t allocationCount);
10544 VkResult CalcAllocationParams(
10545 VmaAllocationCreateInfo& outCreateInfo,
10546 bool dedicatedRequired,
10547 bool dedicatedPreferred);
10548
10549 /*
10550 Calculates and returns bit mask of memory types that can support defragmentation
10551 on GPU as they support creation of required buffer for copy operations.
10552 */
10553 uint32_t CalculateGpuDefragmentationMemoryTypeBits() const;
10554 uint32_t CalculateGlobalMemoryTypeBits() const;
10555
10556 bool GetFlushOrInvalidateRange(
10557 VmaAllocation allocation,
10558 VkDeviceSize offset, VkDeviceSize size,
10559 VkMappedMemoryRange& outRange) const;
10560
10561#if VMA_MEMORY_BUDGET
10562 void UpdateVulkanBudget();
10563#endif // #if VMA_MEMORY_BUDGET
10564};
10565
10566
10567#ifndef _VMA_MEMORY_FUNCTIONS
static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
{
    return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
}

static void VmaFree(VmaAllocator hAllocator, void* ptr)
{
    VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
}
10577
10578template<typename T>
10579static T* VmaAllocate(VmaAllocator hAllocator)
10580{
    return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
10582}
10583
10584template<typename T>
10585static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
10586{
    return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
10588}
10589
10590template<typename T>
10591static void vma_delete(VmaAllocator hAllocator, T* ptr)
10592{
10593 if(ptr != VMA_NULL)
10594 {
10595 ptr->~T();
10596 VmaFree(hAllocator, ptr);
10597 }
10598}
10599
10600template<typename T>
10601static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
10602{
10603 if(ptr != VMA_NULL)
10604 {
10605 for(size_t i = count; i--; )
10606 ptr[i].~T();
10607 VmaFree(hAllocator, ptr);
10608 }
10609}
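/*
Editorial note: vma_delete_array destroys elements in reverse index order
(i = count; i--;), mirroring built-in array semantics. It pairs with the
vma_new_array macro used earlier in this file, which allocates and
placement-constructs each element. Sketch:

    Block** list = vma_new_array(allocs, Block*, n); // allocate + construct n elements
    // ...
    vma_delete_array(allocs, list, n);               // ~T() for each, then VmaFree()
*/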
10610#endif // _VMA_MEMORY_FUNCTIONS

#ifndef _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator)
    : m_pMetadata(VMA_NULL),
    m_MemoryTypeIndex(UINT32_MAX),
    m_Id(0),
    m_hMemory(VK_NULL_HANDLE),
    m_MapCount(0),
    m_pMappedData(VMA_NULL) {}

VmaDeviceMemoryBlock::~VmaDeviceMemoryBlock()
{
    VMA_ASSERT_LEAK(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
    VMA_ASSERT_LEAK(m_hMemory == VK_NULL_HANDLE);
}

void VmaDeviceMemoryBlock::Init(
    VmaAllocator hAllocator,
    VmaPool hParentPool,
    uint32_t newMemoryTypeIndex,
    VkDeviceMemory newMemory,
    VkDeviceSize newSize,
    uint32_t id,
    uint32_t algorithm,
    VkDeviceSize bufferImageGranularity)
{
    VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);

    m_hParentPool = hParentPool;
    m_MemoryTypeIndex = newMemoryTypeIndex;
    m_Id = id;
    m_hMemory = newMemory;

    switch (algorithm)
    {
    case 0:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
            bufferImageGranularity, false); // isVirtual
        break;
    case VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Linear)(hAllocator->GetAllocationCallbacks(),
            bufferImageGranularity, false); // isVirtual
        break;
    default:
        VMA_ASSERT(0);
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
            bufferImageGranularity, false); // isVirtual
    }
    m_pMetadata->Init(newSize);
}
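
/*
Illustrative sketch: the algorithm value dispatched above originates from the
pool creation flags. A custom pool opts into the linear metadata like this
(other members of poolCreateInfo filled in as usual), while the default flags
(0) select the TLSF metadata:

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
*/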

void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
{
    // Define macro VMA_DEBUG_LOG_FORMAT or the more specialized VMA_LEAK_LOG_FORMAT
    // to receive the list of unfreed allocations.
    if (!m_pMetadata->IsEmpty())
        m_pMetadata->DebugLogAllAllocations();
    // This is the most important assert in the entire library.
    // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
    VMA_ASSERT_LEAK(m_pMetadata->IsEmpty() && "Some allocations were not freed before destruction of this memory block!");

    VMA_ASSERT_LEAK(m_hMemory != VK_NULL_HANDLE);
    allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_pMetadata->GetSize(), m_hMemory);
    m_hMemory = VK_NULL_HANDLE;

    vma_delete(allocator, m_pMetadata);
    m_pMetadata = VMA_NULL;
}

void VmaDeviceMemoryBlock::PostAlloc(VmaAllocator hAllocator)
{
    VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    m_MappingHysteresis.PostAlloc();
}

void VmaDeviceMemoryBlock::PostFree(VmaAllocator hAllocator)
{
    VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    if(m_MappingHysteresis.PostFree())
    {
        VMA_ASSERT(m_MappingHysteresis.GetExtraMapping() == 0);
        if (m_MapCount == 0)
        {
            m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
        }
    }
}

bool VmaDeviceMemoryBlock::Validate() const
{
    VMA_VALIDATE((m_hMemory != VK_NULL_HANDLE) &&
        (m_pMetadata->GetSize() != 0));

    return m_pMetadata->Validate();
}

VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
{
    void* pData = VMA_NULL;
    VkResult res = Map(hAllocator, 1, &pData);
    if (res != VK_SUCCESS)
    {
        return res;
    }

    res = m_pMetadata->CheckCorruption(pData);

    Unmap(hAllocator, 1);

    return res;
}

VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
{
    if (count == 0)
    {
        return VK_SUCCESS;
    }

    VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    const uint32_t oldTotalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
    if (oldTotalMapCount != 0)
    {
        VMA_ASSERT(m_pMappedData != VMA_NULL);
        m_MappingHysteresis.PostMap();
        m_MapCount += count;
        if (ppData != VMA_NULL)
        {
            *ppData = m_pMappedData;
        }
        return VK_SUCCESS;
    }
    else
    {
        VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
            hAllocator->m_hDevice,
            m_hMemory,
            0, // offset
            VK_WHOLE_SIZE,
            0, // flags
            &m_pMappedData);
        if (result == VK_SUCCESS)
        {
            VMA_ASSERT(m_pMappedData != VMA_NULL);
            m_MappingHysteresis.PostMap();
            m_MapCount = count;
            if (ppData != VMA_NULL)
            {
                *ppData = m_pMappedData;
            }
        }
        return result;
    }
}

void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
{
    if (count == 0)
    {
        return;
    }

    VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    if (m_MapCount >= count)
    {
        m_MapCount -= count;
        const uint32_t totalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
        if (totalMapCount == 0)
        {
            m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
        }
        m_MappingHysteresis.PostUnmap();
    }
    else
    {
        VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    }
}
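
/*
Illustrative sketch: Map()/Unmap() above are reference-counted per block, which
is what makes nested mappings through the public API legal as long as they are
balanced:

    void* pData;
    VkResult res = vmaMapMemory(allocator, allocation, &pData);
    // ... read or write pData ...
    vmaUnmapMemory(allocator, allocation); // must balance each vmaMapMemory
*/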

VkResult VmaDeviceMemoryBlock::WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
    VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);

    void* pData;
    VkResult res = Map(hAllocator, 1, &pData);
    if (res != VK_SUCCESS)
    {
        return res;
    }

    VmaWriteMagicValue(pData, allocOffset + allocSize);

    Unmap(hAllocator, 1);
    return VK_SUCCESS;
}

VkResult VmaDeviceMemoryBlock::ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
    VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);

    void* pData;
    VkResult res = Map(hAllocator, 1, &pData);
    if (res != VK_SUCCESS)
    {
        return res;
    }

    if (!VmaValidateMagicValue(pData, allocOffset + allocSize))
    {
        VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
    }

    Unmap(hAllocator, 1);
    return VK_SUCCESS;
}
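
/*
Illustrative sketch: the magic-value helpers above are only active when
corruption detection is compiled in, e.g. before including this header:

    #define VMA_DEBUG_MARGIN 16
    #define VMA_DEBUG_DETECT_CORRUPTION 1
    #include "vk_mem_alloc.h"

Validation of the margins can then also be requested explicitly via
vmaCheckCorruption().
*/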
10828
10829VkResult VmaDeviceMemoryBlock::BindBufferMemory(
10830 const VmaAllocator hAllocator,
10831 const VmaAllocation hAllocation,
10832 VkDeviceSize allocationLocalOffset,
10833 VkBuffer hBuffer,
10834 const void* pNext)
10835{
10836 VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
10837 hAllocation->GetBlock() == this);
10838 VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
10839 "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
10840 const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
10841 // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
10842 VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
10843 return hAllocator->BindVulkanBuffer(memory: m_hMemory, memoryOffset, buffer: hBuffer, pNext);
10844}
10845
10846VkResult VmaDeviceMemoryBlock::BindImageMemory(
10847 const VmaAllocator hAllocator,
10848 const VmaAllocation hAllocation,
10849 VkDeviceSize allocationLocalOffset,
10850 VkImage hImage,
10851 const void* pNext)
10852{
10853 VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
10854 hAllocation->GetBlock() == this);
10855 VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
10856 "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
10857 const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
10858 // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
10859 VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
10860 return hAllocator->BindVulkanImage(memory: m_hMemory, memoryOffset, image: hImage, pNext);
10861}
10862
10863#if VMA_EXTERNAL_MEMORY_WIN32
10864VkResult VmaDeviceMemoryBlock::CreateWin32Handle(const VmaAllocator hAllocator, PFN_vkGetMemoryWin32HandleKHR pvkGetMemoryWin32HandleKHR, HANDLE hTargetProcess, HANDLE* pHandle) noexcept
10865{
10866 VMA_ASSERT(pHandle);
10867 return m_Handle.GetHandle(hAllocator->m_hDevice, m_hMemory, pvkGetMemoryWin32HandleKHR, hTargetProcess, hAllocator->m_UseMutex, pHandle);
10868}
10869#endif // VMA_EXTERNAL_MEMORY_WIN32
10870#endif // _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
10871
10872#ifndef _VMA_ALLOCATION_T_FUNCTIONS
10873VmaAllocation_T::VmaAllocation_T(bool mappingAllowed)
10874 : m_Alignment{ 1 },
10875 m_Size{ 0 },
10876 m_pUserData{ VMA_NULL },
10877 m_pName{ VMA_NULL },
10878 m_MemoryTypeIndex{ 0 },
10879 m_Type{ (uint8_t)ALLOCATION_TYPE_NONE },
10880 m_SuballocationType{ (uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN },
10881 m_MapCount{ 0 },
10882 m_Flags{ 0 }
10883{
10884 if(mappingAllowed)
10885 m_Flags |= (uint8_t)FLAG_MAPPING_ALLOWED;
10886}
10887
10888VmaAllocation_T::~VmaAllocation_T()
10889{
10890 VMA_ASSERT_LEAK(m_MapCount == 0 && "Allocation was not unmapped before destruction.");
10891
10892 // Check if owned string was freed.
10893 VMA_ASSERT(m_pName == VMA_NULL);
10894}
10895
10896void VmaAllocation_T::InitBlockAllocation(
10897 VmaDeviceMemoryBlock* block,
10898 VmaAllocHandle allocHandle,
10899 VkDeviceSize alignment,
10900 VkDeviceSize size,
10901 uint32_t memoryTypeIndex,
10902 VmaSuballocationType suballocationType,
10903 bool mapped)
10904{
10905 VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
10906 VMA_ASSERT(block != VMA_NULL);
10907 m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
10908 m_Alignment = alignment;
10909 m_Size = size;
10910 m_MemoryTypeIndex = memoryTypeIndex;
10911 if(mapped)
10912 {
10913 VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
10914 m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
10915 }
10916 m_SuballocationType = (uint8_t)suballocationType;
10917 m_BlockAllocation.m_Block = block;
10918 m_BlockAllocation.m_AllocHandle = allocHandle;
10919}
10920
10921void VmaAllocation_T::InitDedicatedAllocation(
10922 VmaAllocator allocator,
10923 VmaPool hParentPool,
10924 uint32_t memoryTypeIndex,
10925 VkDeviceMemory hMemory,
10926 VmaSuballocationType suballocationType,
10927 void* pMappedData,
10928 VkDeviceSize size)
10929{
10930 VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
10931 VMA_ASSERT(hMemory != VK_NULL_HANDLE);
10932 m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
10933 m_Alignment = 0;
10934 m_Size = size;
10935 m_MemoryTypeIndex = memoryTypeIndex;
10936 m_SuballocationType = (uint8_t)suballocationType;
10937 m_DedicatedAllocation.m_ExtraData = VMA_NULL;
10938 m_DedicatedAllocation.m_hParentPool = hParentPool;
10939 m_DedicatedAllocation.m_hMemory = hMemory;
10940 m_DedicatedAllocation.m_Prev = VMA_NULL;
10941 m_DedicatedAllocation.m_Next = VMA_NULL;
10942
10943 if (pMappedData != VMA_NULL)
10944 {
10945 VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
10946 m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
10947 EnsureExtraData(hAllocator: allocator);
10948 m_DedicatedAllocation.m_ExtraData->m_pMappedData = pMappedData;
10949 }
10950}
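
/*
Illustrative sketch: a dedicated allocation like the one initialized above is
typically the result of requesting it explicitly at allocation time:

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
*/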

void VmaAllocation_T::Destroy(VmaAllocator allocator)
{
    FreeName(allocator);

    if (GetType() == ALLOCATION_TYPE_DEDICATED)
    {
        vma_delete(allocator, m_DedicatedAllocation.m_ExtraData);
    }
}

void VmaAllocation_T::SetName(VmaAllocator hAllocator, const char* pName)
{
    VMA_ASSERT(pName == VMA_NULL || pName != m_pName);

    FreeName(hAllocator);

    if (pName != VMA_NULL)
        m_pName = VmaCreateStringCopy(hAllocator->GetAllocationCallbacks(), pName);
}

uint8_t VmaAllocation_T::SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation)
{
    VMA_ASSERT(allocation != VMA_NULL);
    VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    VMA_ASSERT(allocation->m_Type == ALLOCATION_TYPE_BLOCK);

    if (m_MapCount != 0)
        m_BlockAllocation.m_Block->Unmap(hAllocator, m_MapCount);

    m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, allocation);
    std::swap(m_BlockAllocation, allocation->m_BlockAllocation);
    m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, this);

#if VMA_STATS_STRING_ENABLED
    std::swap(m_BufferImageUsage, allocation->m_BufferImageUsage);
#endif
    return m_MapCount;
}

VmaAllocHandle VmaAllocation_T::GetAllocHandle() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_AllocHandle;
    case ALLOCATION_TYPE_DEDICATED:
        return VK_NULL_HANDLE;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

VkDeviceSize VmaAllocation_T::GetOffset() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->m_pMetadata->GetAllocationOffset(m_BlockAllocation.m_AllocHandle);
    case ALLOCATION_TYPE_DEDICATED:
        return 0;
    default:
        VMA_ASSERT(0);
        return 0;
    }
}

VmaPool VmaAllocation_T::GetParentPool() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->GetParentPool();
    case ALLOCATION_TYPE_DEDICATED:
        return m_DedicatedAllocation.m_hParentPool;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

VkDeviceMemory VmaAllocation_T::GetMemory() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->GetDeviceMemory();
    case ALLOCATION_TYPE_DEDICATED:
        return m_DedicatedAllocation.m_hMemory;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

void* VmaAllocation_T::GetMappedData() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        if (m_MapCount != 0 || IsPersistentMap())
        {
            void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
            VMA_ASSERT(pBlockData != VMA_NULL);
            return (char*)pBlockData + GetOffset();
        }
        else
        {
            return VMA_NULL;
        }
        break;
    case ALLOCATION_TYPE_DEDICATED:
        VMA_ASSERT((m_DedicatedAllocation.m_ExtraData != VMA_NULL && m_DedicatedAllocation.m_ExtraData->m_pMappedData != VMA_NULL) ==
            (m_MapCount != 0 || IsPersistentMap()));
        return m_DedicatedAllocation.m_ExtraData != VMA_NULL ? m_DedicatedAllocation.m_ExtraData->m_pMappedData : VMA_NULL;
    default:
        VMA_ASSERT(0);
        return VMA_NULL;
    }
}
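
/*
Illustrative sketch: the getters above back the public query, which is the
recommended way to read these parameters:

    VmaAllocationInfo allocInfo;
    vmaGetAllocationInfo(allocator, allocation, &allocInfo);
    // allocInfo.deviceMemory, allocInfo.offset, allocInfo.size and
    // allocInfo.pMappedData mirror GetMemory()/GetOffset()/GetMappedData().
*/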

void VmaAllocation_T::BlockAllocMap()
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");

    if (m_MapCount < 0xFF)
    {
        ++m_MapCount;
    }
    else
    {
        VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
    }
}

void VmaAllocation_T::BlockAllocUnmap()
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);

    if (m_MapCount > 0)
    {
        --m_MapCount;
    }
    else
    {
        VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
    }
}

VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");

    EnsureExtraData(hAllocator);

    if (m_MapCount != 0 || IsPersistentMap())
    {
        if (m_MapCount < 0xFF)
        {
            VMA_ASSERT(m_DedicatedAllocation.m_ExtraData->m_pMappedData != VMA_NULL);
            *ppData = m_DedicatedAllocation.m_ExtraData->m_pMappedData;
            ++m_MapCount;
            return VK_SUCCESS;
        }
        else
        {
            VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
            return VK_ERROR_MEMORY_MAP_FAILED;
        }
    }
    else
    {
        VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
            hAllocator->m_hDevice,
            m_DedicatedAllocation.m_hMemory,
            0, // offset
            VK_WHOLE_SIZE,
            0, // flags
            ppData);
        if (result == VK_SUCCESS)
        {
            m_DedicatedAllocation.m_ExtraData->m_pMappedData = *ppData;
            m_MapCount = 1;
        }
        return result;
    }
}

void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);

    if (m_MapCount > 0)
    {
        --m_MapCount;
        if (m_MapCount == 0 && !IsPersistentMap())
        {
            VMA_ASSERT(m_DedicatedAllocation.m_ExtraData != VMA_NULL);
            m_DedicatedAllocation.m_ExtraData->m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
                hAllocator->m_hDevice,
                m_DedicatedAllocation.m_hMemory);
        }
    }
    else
    {
        VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
    }
}

#if VMA_STATS_STRING_ENABLED
void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
{
    json.WriteString("Type");
    json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);

    json.WriteString("Size");
    json.WriteNumber(m_Size);
    json.WriteString("Usage");
    json.WriteNumber(m_BufferImageUsage.Value); // It may be uint32_t or uint64_t.

    if (m_pUserData != VMA_NULL)
    {
        json.WriteString("CustomData");
        json.BeginString();
        json.ContinueString_Pointer(m_pUserData);
        json.EndString();
    }
    if (m_pName != VMA_NULL)
    {
        json.WriteString("Name");
        json.WriteString(m_pName);
    }
}
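
/*
Illustrative sketch: PrintParameters() above feeds the global JSON dump exposed
by the public API:

    char* statsString = VMA_NULL;
    vmaBuildStatsString(allocator, &statsString, VK_TRUE); // detailed map
    // ... write statsString to a file for offline inspection ...
    vmaFreeStatsString(allocator, statsString);
*/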
#if VMA_EXTERNAL_MEMORY_WIN32
VkResult VmaAllocation_T::GetWin32Handle(VmaAllocator hAllocator, HANDLE hTargetProcess, HANDLE* pHandle) noexcept
{
    auto pvkGetMemoryWin32HandleKHR = hAllocator->GetVulkanFunctions().vkGetMemoryWin32HandleKHR;
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->CreateWin32Handle(hAllocator, pvkGetMemoryWin32HandleKHR, hTargetProcess, pHandle);
    case ALLOCATION_TYPE_DEDICATED:
        EnsureExtraData(hAllocator);
        return m_DedicatedAllocation.m_ExtraData->m_Handle.GetHandle(hAllocator->m_hDevice, m_DedicatedAllocation.m_hMemory, pvkGetMemoryWin32HandleKHR, hTargetProcess, hAllocator->m_UseMutex, pHandle);
    default:
        VMA_ASSERT(0);
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }
}
#endif // VMA_EXTERNAL_MEMORY_WIN32
#endif // VMA_STATS_STRING_ENABLED

void VmaAllocation_T::EnsureExtraData(VmaAllocator hAllocator)
{
    if (m_DedicatedAllocation.m_ExtraData == VMA_NULL)
    {
        m_DedicatedAllocation.m_ExtraData = vma_new(hAllocator, VmaAllocationExtraData)();
    }
}

void VmaAllocation_T::FreeName(VmaAllocator hAllocator)
{
    if(m_pName)
    {
        VmaFreeString(hAllocator->GetAllocationCallbacks(), m_pName);
        m_pName = VMA_NULL;
    }
}
#endif // _VMA_ALLOCATION_T_FUNCTIONS

#ifndef _VMA_BLOCK_VECTOR_FUNCTIONS
VmaBlockVector::VmaBlockVector(
    VmaAllocator hAllocator,
    VmaPool hParentPool,
    uint32_t memoryTypeIndex,
    VkDeviceSize preferredBlockSize,
    size_t minBlockCount,
    size_t maxBlockCount,
    VkDeviceSize bufferImageGranularity,
    bool explicitBlockSize,
    uint32_t algorithm,
    float priority,
    VkDeviceSize minAllocationAlignment,
    void* pMemoryAllocateNext)
    : m_hAllocator(hAllocator),
    m_hParentPool(hParentPool),
    m_MemoryTypeIndex(memoryTypeIndex),
    m_PreferredBlockSize(preferredBlockSize),
    m_MinBlockCount(minBlockCount),
    m_MaxBlockCount(maxBlockCount),
    m_BufferImageGranularity(bufferImageGranularity),
    m_ExplicitBlockSize(explicitBlockSize),
    m_Algorithm(algorithm),
    m_Priority(priority),
    m_MinAllocationAlignment(minAllocationAlignment),
    m_pMemoryAllocateNext(pMemoryAllocateNext),
    m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    m_NextBlockId(0) {}

VmaBlockVector::~VmaBlockVector()
{
    for (size_t i = m_Blocks.size(); i--; )
    {
        m_Blocks[i]->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, m_Blocks[i]);
    }
}

VkResult VmaBlockVector::CreateMinBlocks()
{
    for (size_t i = 0; i < m_MinBlockCount; ++i)
    {
        VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
        if (res != VK_SUCCESS)
        {
            return res;
        }
    }
    return VK_SUCCESS;
}

void VmaBlockVector::AddStatistics(VmaStatistics& inoutStats)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    const size_t blockCount = m_Blocks.size();
    for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        pBlock->m_pMetadata->AddStatistics(inoutStats);
    }
}

void VmaBlockVector::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    const size_t blockCount = m_Blocks.size();
    for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        pBlock->m_pMetadata->AddDetailedStatistics(inoutStats);
    }
}

bool VmaBlockVector::IsEmpty()
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
    return m_Blocks.empty();
}

bool VmaBlockVector::IsCorruptionDetectionEnabled() const
{
    const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
        (VMA_DEBUG_MARGIN > 0) &&
        (m_Algorithm == 0 || m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT) &&
        (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
}

VkResult VmaBlockVector::Allocate(
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    size_t allocIndex;
    VkResult res = VK_SUCCESS;

    alignment = VMA_MAX(alignment, m_MinAllocationAlignment);

    if (IsCorruptionDetectionEnabled())
    {
        size = VmaAlignUp<VkDeviceSize>(size, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
        alignment = VmaAlignUp<VkDeviceSize>(alignment, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
    }

    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
        for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
        {
            res = AllocatePage(
                size,
                alignment,
                createInfo,
                suballocType,
                pAllocations + allocIndex);
            if (res != VK_SUCCESS)
            {
                break;
            }
        }
    }

    if (res != VK_SUCCESS)
    {
        // Free all already-created allocations.
        while (allocIndex--)
            Free(pAllocations[allocIndex]);
        memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
    }

    return res;
}
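
/*
Illustrative sketch: the all-or-nothing rollback above is what the public
multi-allocation entry point relies on; memReq and allocCreateInfo are assumed
to be filled in elsewhere:

    VmaAllocation allocs[8];
    VkResult res = vmaAllocateMemoryPages(
        allocator, &memReq, &allocCreateInfo, 8, allocs, VMA_NULL);
    // On failure, none of the 8 allocations remain live.
*/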

VkResult VmaBlockVector::AllocatePage(
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    VmaAllocation* pAllocation)
{
    const bool isUpperAddress = (createInfo.flags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;

    VkDeviceSize freeMemory;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
        freeMemory = (heapBudget.usage < heapBudget.budget) ? (heapBudget.budget - heapBudget.usage) : 0;
    }

    const bool canFallbackToDedicated = !HasExplicitBlockSize() &&
        (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0;
    const bool canCreateNewBlock =
        ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
        (m_Blocks.size() < m_MaxBlockCount) &&
        (freeMemory >= size || !canFallbackToDedicated);
    uint32_t strategy = createInfo.flags & VMA_ALLOCATION_CREATE_STRATEGY_MASK;

    // Upper address can only be used with the linear allocator and within a single memory block.
    if (isUpperAddress &&
        (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT || m_MaxBlockCount > 1))
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    // Early reject: requested allocation size is larger than the maximum block size for this block vector.
    if (size + VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    {
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }

    // 1. Search existing allocations. Try to allocate.
    if (m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
    {
        // Use only the last block.
        if (!m_Blocks.empty())
        {
            VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks.back();
            VMA_ASSERT(pCurrBlock);
            VkResult res = AllocateFromBlock(
                pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
            if (res == VK_SUCCESS)
            {
                VMA_DEBUG_LOG_FORMAT(" Returned from last block #%" PRIu32, pCurrBlock->GetId());
                IncrementallySortBlocks();
                return VK_SUCCESS;
            }
        }
    }
    else
    {
        if (strategy != VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT) // MIN_MEMORY or default
        {
            const bool isHostVisible =
                (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
            if(isHostVisible)
            {
                const bool isMappingAllowed = (createInfo.flags &
                    (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;
                /*
                For non-mappable allocations, check blocks that are not mapped first.
                For mappable allocations, check blocks that are already mapped first.
                This way, having many blocks, we will separate mappable and non-mappable allocations,
                hopefully limiting the number of blocks that are mapped, which will help tools like RenderDoc.
                */
                for(size_t mappingI = 0; mappingI < 2; ++mappingI)
                {
                    // Forward order in m_Blocks - prefer blocks with the smallest amount of free space.
                    for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
                    {
                        VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                        VMA_ASSERT(pCurrBlock);
                        const bool isBlockMapped = pCurrBlock->GetMappedData() != VMA_NULL;
                        if((mappingI == 0) == (isMappingAllowed == isBlockMapped))
                        {
                            VkResult res = AllocateFromBlock(
                                pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
                            if (res == VK_SUCCESS)
                            {
                                VMA_DEBUG_LOG_FORMAT(" Returned from existing block #%" PRIu32, pCurrBlock->GetId());
                                IncrementallySortBlocks();
                                return VK_SUCCESS;
                            }
                        }
                    }
                }
            }
            else
            {
                // Forward order in m_Blocks - prefer blocks with the smallest amount of free space.
                for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VkResult res = AllocateFromBlock(
                        pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
                    if (res == VK_SUCCESS)
                    {
                        VMA_DEBUG_LOG_FORMAT(" Returned from existing block #%" PRIu32, pCurrBlock->GetId());
                        IncrementallySortBlocks();
                        return VK_SUCCESS;
                    }
                }
            }
        }
        else // VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT
        {
            // Backward order in m_Blocks - prefer blocks with the largest amount of free space.
            for (size_t blockIndex = m_Blocks.size(); blockIndex--; )
            {
                VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                VMA_ASSERT(pCurrBlock);
                VkResult res = AllocateFromBlock(pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
                if (res == VK_SUCCESS)
                {
                    VMA_DEBUG_LOG_FORMAT(" Returned from existing block #%" PRIu32, pCurrBlock->GetId());
                    IncrementallySortBlocks();
                    return VK_SUCCESS;
                }
            }
        }
    }

    // 2. Try to create a new block.
    if (canCreateNewBlock)
    {
        // Calculate the optimal size for the new block.
        VkDeviceSize newBlockSize = m_PreferredBlockSize;
        uint32_t newBlockSizeShift = 0;
        const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;

        if (!m_ExplicitBlockSize)
        {
            // Allocate 1/8, 1/4, 1/2 as first blocks.
            const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
            for (uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
            {
                const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                if (smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
                {
                    newBlockSize = smallerNewBlockSize;
                    ++newBlockSizeShift;
                }
                else
                {
                    break;
                }
            }
        }

        size_t newBlockIndex = 0;
        VkResult res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
            CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
        // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
        if (!m_ExplicitBlockSize)
        {
            while (res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
            {
                const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                if (smallerNewBlockSize >= size)
                {
                    newBlockSize = smallerNewBlockSize;
                    ++newBlockSizeShift;
                    res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
                        CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
                }
                else
                {
                    break;
                }
            }
        }

        if (res == VK_SUCCESS)
        {
            VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
            VMA_ASSERT(pBlock->m_pMetadata->GetSize() >= size);

            res = AllocateFromBlock(
                pBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
            if (res == VK_SUCCESS)
            {
                VMA_DEBUG_LOG_FORMAT(" Created new block #%" PRIu32 " Size=%" PRIu64, pBlock->GetId(), newBlockSize);
                IncrementallySortBlocks();
                return VK_SUCCESS;
            }
            else
            {
                // Allocation from the new block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
                return VK_ERROR_OUT_OF_DEVICE_MEMORY;
            }
        }
    }

    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
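
/*
Worked example of the sizing heuristic above, assuming the default 256 MiB
preferred block size and no explicit block size: the first blocks are created
at 32 MiB (1/8), 64 MiB (1/4) and 128 MiB (1/2) before the full preferred size
is used, and on failure the same shifts are retried downward as long as the
smaller size still fits the requested allocation.
*/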

void VmaBlockVector::Free(const VmaAllocation hAllocation)
{
    VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;

    bool budgetExceeded = false;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
        budgetExceeded = heapBudget.usage >= heapBudget.budget;
    }

    // Scope for lock.
    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

        VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();

        if (IsCorruptionDetectionEnabled())
        {
            VkResult res = pBlock->ValidateMagicValueAfterAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
            VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
        }

        if (hAllocation->IsPersistentMap())
        {
            pBlock->Unmap(m_hAllocator, 1);
        }

        const bool hadEmptyBlockBeforeFree = HasEmptyBlock();
        pBlock->m_pMetadata->Free(hAllocation->GetAllocHandle());
        pBlock->PostFree(m_hAllocator);
        VMA_HEAVY_ASSERT(pBlock->Validate());

        VMA_DEBUG_LOG_FORMAT(" Freed from MemoryTypeIndex=%" PRIu32, m_MemoryTypeIndex);

        const bool canDeleteBlock = m_Blocks.size() > m_MinBlockCount;
        // pBlock became empty after this deallocation.
        if (pBlock->m_pMetadata->IsEmpty())
        {
            // We already had an empty block - we don't want two, so delete this one.
            if ((hadEmptyBlockBeforeFree || budgetExceeded) && canDeleteBlock)
            {
                pBlockToDelete = pBlock;
                Remove(pBlock);
            }
            // else: We now have one empty block - leave it. A hysteresis to avoid allocating a whole block back and forth.
        }
        // pBlock didn't become empty, but we have another empty block - find and free that one.
        // (This is optional; a heuristic.)
        else if (hadEmptyBlockBeforeFree && canDeleteBlock)
        {
            VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
            if (pLastBlock->m_pMetadata->IsEmpty())
            {
                pBlockToDelete = pLastBlock;
                m_Blocks.pop_back();
            }
        }

        IncrementallySortBlocks();

        m_hAllocator->m_Budget.RemoveAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), hAllocation->GetSize());
        hAllocation->Destroy(m_hAllocator);
        m_hAllocator->m_AllocationObjectAllocator.Free(hAllocation);
    }

    // Destruction of an empty block. Deferred until this point, outside of the mutex
    // lock, for performance reasons.
    if (pBlockToDelete != VMA_NULL)
    {
        VMA_DEBUG_LOG_FORMAT(" Deleted empty block #%" PRIu32, pBlockToDelete->GetId());
        pBlockToDelete->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, pBlockToDelete);
    }
}

VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
{
    VkDeviceSize result = 0;
    for (size_t i = m_Blocks.size(); i--; )
    {
        result = VMA_MAX(result, m_Blocks[i]->m_pMetadata->GetSize());
        if (result >= m_PreferredBlockSize)
        {
            break;
        }
    }
    return result;
}

void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
{
    for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        if (m_Blocks[blockIndex] == pBlock)
        {
            VmaVectorRemove(m_Blocks, blockIndex);
            return;
        }
    }
    VMA_ASSERT(0);
}

void VmaBlockVector::IncrementallySortBlocks()
{
    if (!m_IncrementalSort)
        return;
    if (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
    {
        // Bubble sort only until the first swap.
        for (size_t i = 1; i < m_Blocks.size(); ++i)
        {
            if (m_Blocks[i - 1]->m_pMetadata->GetSumFreeSize() > m_Blocks[i]->m_pMetadata->GetSumFreeSize())
            {
                std::swap(m_Blocks[i - 1], m_Blocks[i]);
                return;
            }
        }
    }
}

void VmaBlockVector::SortByFreeSize()
{
    VMA_SORT(m_Blocks.begin(), m_Blocks.end(),
        [](VmaDeviceMemoryBlock* b1, VmaDeviceMemoryBlock* b2) -> bool
        {
            return b1->m_pMetadata->GetSumFreeSize() < b2->m_pMetadata->GetSumFreeSize();
        });
}

VkResult VmaBlockVector::AllocateFromBlock(
    VmaDeviceMemoryBlock* pBlock,
    VkDeviceSize size,
    VkDeviceSize alignment,
    VmaAllocationCreateFlags allocFlags,
    void* pUserData,
    VmaSuballocationType suballocType,
    uint32_t strategy,
    VmaAllocation* pAllocation)
{
    const bool isUpperAddress = (allocFlags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;

    VmaAllocationRequest currRequest = {};
    if (pBlock->m_pMetadata->CreateAllocationRequest(
        size,
        alignment,
        isUpperAddress,
        suballocType,
        strategy,
        &currRequest))
    {
        return CommitAllocationRequest(currRequest, pBlock, alignment, allocFlags, pUserData, suballocType, pAllocation);
    }
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}

VkResult VmaBlockVector::CommitAllocationRequest(
    VmaAllocationRequest& allocRequest,
    VmaDeviceMemoryBlock* pBlock,
    VkDeviceSize alignment,
    VmaAllocationCreateFlags allocFlags,
    void* pUserData,
    VmaSuballocationType suballocType,
    VmaAllocation* pAllocation)
{
    const bool mapped = (allocFlags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    const bool isUserDataString = (allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
    const bool isMappingAllowed = (allocFlags &
        (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;

    pBlock->PostAlloc(m_hAllocator);
    // Allocate from pBlock.
    if (mapped)
    {
        VkResult res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
        if (res != VK_SUCCESS)
        {
            return res;
        }
    }

    *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(isMappingAllowed);
    pBlock->m_pMetadata->Alloc(allocRequest, suballocType, *pAllocation);
    (*pAllocation)->InitBlockAllocation(
        pBlock,
        allocRequest.allocHandle,
        alignment,
        allocRequest.size, // Not size, as the actual allocation size may be larger than requested!
        m_MemoryTypeIndex,
        suballocType,
        mapped);
    VMA_HEAVY_ASSERT(pBlock->Validate());
    if (isUserDataString)
        (*pAllocation)->SetName(m_hAllocator, (const char*)pUserData);
    else
        (*pAllocation)->SetUserData(m_hAllocator, pUserData);
    m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), allocRequest.size);
    if (VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    {
        m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    }
    if (IsCorruptionDetectionEnabled())
    {
        VkResult res = pBlock->WriteMagicValueAfterAllocation(m_hAllocator, (*pAllocation)->GetOffset(), allocRequest.size);
        VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    }
    return VK_SUCCESS;
}

VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
{
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.pNext = m_pMemoryAllocateNext;
    allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    allocInfo.allocationSize = blockSize;

#if VMA_BUFFER_DEVICE_ADDRESS
    // Every standalone block can potentially contain a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT - always enable the feature.
    VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
    if (m_hAllocator->m_UseKhrBufferDeviceAddress)
    {
        allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
        VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
    }
#endif // VMA_BUFFER_DEVICE_ADDRESS

#if VMA_MEMORY_PRIORITY
    VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
    if (m_hAllocator->m_UseExtMemoryPriority)
    {
        VMA_ASSERT(m_Priority >= 0.f && m_Priority <= 1.f);
        priorityInfo.priority = m_Priority;
        VmaPnextChainPushFront(&allocInfo, &priorityInfo);
    }
#endif // VMA_MEMORY_PRIORITY

#if VMA_EXTERNAL_MEMORY
    // Attach VkExportMemoryAllocateInfoKHR if necessary.
    VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
    exportMemoryAllocInfo.handleTypes = m_hAllocator->GetExternalMemoryHandleTypeFlags(m_MemoryTypeIndex);
    if (exportMemoryAllocInfo.handleTypes != 0)
    {
        VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
    }
#endif // VMA_EXTERNAL_MEMORY

    VkDeviceMemory mem = VK_NULL_HANDLE;
    VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    if (res < 0)
    {
        return res;
    }

    // New VkDeviceMemory successfully created.

    // Create a new block object for it.
    VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    pBlock->Init(
        m_hAllocator,
        m_hParentPool,
        m_MemoryTypeIndex,
        mem,
        allocInfo.allocationSize,
        m_NextBlockId++,
        m_Algorithm,
        m_BufferImageGranularity);

    m_Blocks.push_back(pBlock);
    if (pNewBlockIndex != VMA_NULL)
    {
        *pNewBlockIndex = m_Blocks.size() - 1;
    }

    return VK_SUCCESS;
}
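
/*
Illustrative sketch: the priority attached above via
VkMemoryPriorityAllocateInfoEXT comes from the public API, e.g. when creating
a custom pool (requires the VK_EXT_memory_priority extension to be enabled on
the allocator):

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.priority = 1.0f;
*/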
11846
11847bool VmaBlockVector::HasEmptyBlock()
11848{
11849 for (size_t index = 0, count = m_Blocks.size(); index < count; ++index)
11850 {
11851 VmaDeviceMemoryBlock* const pBlock = m_Blocks[index];
11852 if (pBlock->m_pMetadata->IsEmpty())
11853 {
11854 return true;
11855 }
11856 }
11857 return false;
11858}
11859
11860#if VMA_STATS_STRING_ENABLED
11861void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
11862{
11863 VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
11864
11865
11866 json.BeginObject();
11867 for (size_t i = 0; i < m_Blocks.size(); ++i)
11868 {
11869 json.BeginString();
11870 json.ContinueString(n: m_Blocks[i]->GetId());
11871 json.EndString();
11872
11873 json.BeginObject();
11874 json.WriteString(pStr: "MapRefCount");
11875 json.WriteNumber(n: m_Blocks[i]->GetMapRefCount());
11876
11877 m_Blocks[i]->m_pMetadata->PrintDetailedMap(json);
11878 json.EndObject();
11879 }
11880 json.EndObject();
11881}
11882#endif // VMA_STATS_STRING_ENABLED
11883
11884VkResult VmaBlockVector::CheckCorruption()
11885{
11886 if (!IsCorruptionDetectionEnabled())
11887 {
11888 return VK_ERROR_FEATURE_NOT_PRESENT;
11889 }
11890
11891 VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
11892 for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
11893 {
11894 VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
11895 VMA_ASSERT(pBlock);
11896 VkResult res = pBlock->CheckCorruption(hAllocator: m_hAllocator);
11897 if (res != VK_SUCCESS)
11898 {
11899 return res;
11900 }
11901 }
11902 return VK_SUCCESS;
11903}
11904
11905#endif // _VMA_BLOCK_VECTOR_FUNCTIONS
11906
11907#ifndef _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
11908VmaDefragmentationContext_T::VmaDefragmentationContext_T(
11909 VmaAllocator hAllocator,
11910 const VmaDefragmentationInfo& info)
11911 : m_MaxPassBytes(info.maxBytesPerPass == 0 ? VK_WHOLE_SIZE : info.maxBytesPerPass),
11912 m_MaxPassAllocations(info.maxAllocationsPerPass == 0 ? UINT32_MAX : info.maxAllocationsPerPass),
11913 m_BreakCallback(info.pfnBreakCallback),
11914 m_BreakCallbackUserData(info.pBreakCallbackUserData),
11915 m_MoveAllocator(hAllocator->GetAllocationCallbacks()),
11916 m_Moves(m_MoveAllocator)
11917{
11918 m_Algorithm = info.flags & VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK;
11919
11920 if (info.pool != VMA_NULL)
11921 {
11922 m_BlockVectorCount = 1;
11923 m_PoolBlockVector = &info.pool->m_BlockVector;
11924 m_pBlockVectors = &m_PoolBlockVector;
11925 m_PoolBlockVector->SetIncrementalSort(false);
11926 m_PoolBlockVector->SortByFreeSize();
11927 }
11928 else
11929 {
11930 m_BlockVectorCount = hAllocator->GetMemoryTypeCount();
11931 m_PoolBlockVector = VMA_NULL;
11932 m_pBlockVectors = hAllocator->m_pBlockVectors;
11933 for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
11934 {
11935 VmaBlockVector* vector = m_pBlockVectors[i];
11936 if (vector != VMA_NULL)
11937 {
11938 vector->SetIncrementalSort(false);
11939 vector->SortByFreeSize();
11940 }
11941 }
11942 }
11943
11944 switch (m_Algorithm)
11945 {
11946 case 0: // Default algorithm
11947 m_Algorithm = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;
11948 m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
11949 break;
11950 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
11951 m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
11952 break;
11953 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
11954 if (hAllocator->GetBufferImageGranularity() > 1)
11955 {
11956 m_AlgorithmState = vma_new_array(hAllocator, StateExtensive, m_BlockVectorCount);
11957 }
11958 break;
11959 }
11960}
11961
11962VmaDefragmentationContext_T::~VmaDefragmentationContext_T()
11963{
11964 if (m_PoolBlockVector != VMA_NULL)
11965 {
11966 m_PoolBlockVector->SetIncrementalSort(true);
11967 }
11968 else
11969 {
11970 for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
11971 {
11972 VmaBlockVector* vector = m_pBlockVectors[i];
11973 if (vector != VMA_NULL)
11974 vector->SetIncrementalSort(true);
11975 }
11976 }
11977
11978 if (m_AlgorithmState)
11979 {
11980 switch (m_Algorithm)
11981 {
11982 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
11983 vma_delete_array(pAllocationCallbacks: m_MoveAllocator.m_pCallbacks, ptr: reinterpret_cast<StateBalanced*>(m_AlgorithmState), count: m_BlockVectorCount);
11984 break;
11985 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
11986 vma_delete_array(pAllocationCallbacks: m_MoveAllocator.m_pCallbacks, ptr: reinterpret_cast<StateExtensive*>(m_AlgorithmState), count: m_BlockVectorCount);
11987 break;
11988 default:
11989 VMA_ASSERT(0);
11990 }
11991 }
11992}
11993
11994VkResult VmaDefragmentationContext_T::DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo)
11995{
11996 if (m_PoolBlockVector != VMA_NULL)
11997 {
11998 VmaMutexLockWrite lock(m_PoolBlockVector->GetMutex(), m_PoolBlockVector->GetAllocator()->m_UseMutex);
11999
12000 if (m_PoolBlockVector->GetBlockCount() > 1)
12001 ComputeDefragmentation(vector&: *m_PoolBlockVector, index: 0);
12002 else if (m_PoolBlockVector->GetBlockCount() == 1)
12003 ReallocWithinBlock(vector&: *m_PoolBlockVector, block: m_PoolBlockVector->GetBlock(index: 0));
12004 }
12005 else
12006 {
12007 for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
12008 {
12009 if (m_pBlockVectors[i] != VMA_NULL)
12010 {
12011 VmaMutexLockWrite lock(m_pBlockVectors[i]->GetMutex(), m_pBlockVectors[i]->GetAllocator()->m_UseMutex);
12012
12013 if (m_pBlockVectors[i]->GetBlockCount() > 1)
12014 {
12015 if (ComputeDefragmentation(vector&: *m_pBlockVectors[i], index: i))
12016 break;
12017 }
12018 else if (m_pBlockVectors[i]->GetBlockCount() == 1)
12019 {
12020 if (ReallocWithinBlock(vector&: *m_pBlockVectors[i], block: m_pBlockVectors[i]->GetBlock(index: 0)))
12021 break;
12022 }
12023 }
12024 }
12025 }
12026
12027 moveInfo.moveCount = static_cast<uint32_t>(m_Moves.size());
12028 if (moveInfo.moveCount > 0)
12029 {
12030 moveInfo.pMoves = m_Moves.data();
12031 return VK_INCOMPLETE;
12032 }
12033
12034 moveInfo.pMoves = VMA_NULL;
12035 return VK_SUCCESS;
12036}
12037
12038VkResult VmaDefragmentationContext_T::DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo)
12039{
12040 VMA_ASSERT(moveInfo.moveCount > 0 ? moveInfo.pMoves != VMA_NULL : true);
12041
12042 VkResult result = VK_SUCCESS;
12043 VmaStlAllocator<FragmentedBlock> blockAllocator(m_MoveAllocator.m_pCallbacks);
12044 VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> immovableBlocks(blockAllocator);
12045 VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> mappedBlocks(blockAllocator);
12046
12047 VmaAllocator allocator = VMA_NULL;
12048 for (uint32_t i = 0; i < moveInfo.moveCount; ++i)
12049 {
12050 VmaDefragmentationMove& move = moveInfo.pMoves[i];
12051 size_t prevCount = 0, currentCount = 0;
12052 VkDeviceSize freedBlockSize = 0;
12053
12054 uint32_t vectorIndex;
12055 VmaBlockVector* vector;
12056 if (m_PoolBlockVector != VMA_NULL)
12057 {
12058 vectorIndex = 0;
12059 vector = m_PoolBlockVector;
12060 }
12061 else
12062 {
12063 vectorIndex = move.srcAllocation->GetMemoryTypeIndex();
12064 vector = m_pBlockVectors[vectorIndex];
12065 VMA_ASSERT(vector != VMA_NULL);
12066 }
12067
12068 switch (move.operation)
12069 {
12070 case VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY:
12071 {
12072 uint8_t mapCount = move.srcAllocation->SwapBlockAllocation(hAllocator: vector->m_hAllocator, allocation: move.dstTmpAllocation);
12073 if (mapCount > 0)
12074 {
12075 allocator = vector->m_hAllocator;
12076 VmaDeviceMemoryBlock* newMapBlock = move.srcAllocation->GetBlock();
12077 bool notPresent = true;
12078 for (FragmentedBlock& block : mappedBlocks)
12079 {
12080 if (block.block == newMapBlock)
12081 {
12082 notPresent = false;
12083 block.data += mapCount;
12084 break;
12085 }
12086 }
12087 if (notPresent)
12088 mappedBlocks.push_back(src: { .data: mapCount, .block: newMapBlock });
12089 }
12090
12091 // Scope for locks, Free have it's own lock
12092 {
12093 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12094 prevCount = vector->GetBlockCount();
12095 freedBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
12096 }
12097 vector->Free(hAllocation: move.dstTmpAllocation);
12098 {
12099 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12100 currentCount = vector->GetBlockCount();
12101 }
12102
12103 result = VK_INCOMPLETE;
12104 break;
12105 }
12106 case VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE:
12107 {
12108 m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
12109 --m_PassStats.allocationsMoved;
12110 vector->Free(hAllocation: move.dstTmpAllocation);
12111
12112 VmaDeviceMemoryBlock* newBlock = move.srcAllocation->GetBlock();
12113 bool notPresent = true;
12114 for (const FragmentedBlock& block : immovableBlocks)
12115 {
12116 if (block.block == newBlock)
12117 {
12118 notPresent = false;
12119 break;
12120 }
12121 }
12122 if (notPresent)
12123 immovableBlocks.push_back(src: { .data: vectorIndex, .block: newBlock });
12124 break;
12125 }
12126 case VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY:
12127 {
12128 m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
12129 --m_PassStats.allocationsMoved;
12130 // Scope for locks, Free have it's own lock
12131 {
12132 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12133 prevCount = vector->GetBlockCount();
12134 freedBlockSize = move.srcAllocation->GetBlock()->m_pMetadata->GetSize();
12135 }
12136 vector->Free(hAllocation: move.srcAllocation);
12137 {
12138 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12139 currentCount = vector->GetBlockCount();
12140 }
12141 freedBlockSize *= prevCount - currentCount;
12142
12143 VkDeviceSize dstBlockSize;
12144 {
12145 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12146 dstBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
12147 }
12148 vector->Free(hAllocation: move.dstTmpAllocation);
12149 {
12150 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12151 freedBlockSize += dstBlockSize * (currentCount - vector->GetBlockCount());
12152 currentCount = vector->GetBlockCount();
12153 }
12154
12155 result = VK_INCOMPLETE;
12156 break;
12157 }
12158 default:
12159 VMA_ASSERT(0);
12160 }
12161
12162 if (prevCount > currentCount)
12163 {
12164 size_t freedBlocks = prevCount - currentCount;
12165 m_PassStats.deviceMemoryBlocksFreed += static_cast<uint32_t>(freedBlocks);
12166 m_PassStats.bytesFreed += freedBlockSize;
12167 }
12168
12169 if(m_Algorithm == VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT &&
12170 m_AlgorithmState != VMA_NULL)
12171 {
12172 // Avoid unnecessary tries to allocate when new free block is available
12173 StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[vectorIndex];
12174 if (state.firstFreeBlock != SIZE_MAX)
12175 {
12176 const size_t diff = prevCount - currentCount;
12177 if (state.firstFreeBlock >= diff)
12178 {
12179 state.firstFreeBlock -= diff;
12180 if (state.firstFreeBlock != 0)
12181 state.firstFreeBlock -= vector->GetBlock(index: state.firstFreeBlock - 1)->m_pMetadata->IsEmpty();
12182 }
12183 else
12184 state.firstFreeBlock = 0;
12185 }
12186 }
12187 }
12188 moveInfo.moveCount = 0;
12189 moveInfo.pMoves = VMA_NULL;
12190 m_Moves.clear();
12191
12192 // Update stats
12193 m_GlobalStats.allocationsMoved += m_PassStats.allocationsMoved;
12194 m_GlobalStats.bytesFreed += m_PassStats.bytesFreed;
12195 m_GlobalStats.bytesMoved += m_PassStats.bytesMoved;
12196 m_GlobalStats.deviceMemoryBlocksFreed += m_PassStats.deviceMemoryBlocksFreed;
12197 m_PassStats = { .bytesMoved: 0 };
12198
12199 // Move blocks with immovable allocations according to algorithm
12200 if (immovableBlocks.size() > 0)
12201 {
12202 do
12203 {
12204 if(m_Algorithm == VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT)
12205 {
12206 if (m_AlgorithmState != VMA_NULL)
12207 {
12208 bool swapped = false;
12209 // Move to the start of free blocks range
12210 for (const FragmentedBlock& block : immovableBlocks)
12211 {
12212 StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[block.data];
12213 if (state.operation != StateExtensive::Operation::Cleanup)
12214 {
12215 VmaBlockVector* vector = m_pBlockVectors[block.data];
12216 VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12217
12218 for (size_t i = 0, count = vector->GetBlockCount() - m_ImmovableBlockCount; i < count; ++i)
12219 {
12220 if (vector->GetBlock(index: i) == block.block)
12221 {
12222 std::swap(a&: vector->m_Blocks[i], b&: vector->m_Blocks[vector->GetBlockCount() - ++m_ImmovableBlockCount]);
12223 if (state.firstFreeBlock != SIZE_MAX)
12224 {
12225 if (i + 1 < state.firstFreeBlock)
12226 {
12227 if (state.firstFreeBlock > 1)
12228 std::swap(a&: vector->m_Blocks[i], b&: vector->m_Blocks[--state.firstFreeBlock]);
12229 else
12230 --state.firstFreeBlock;
12231 }
12232 }
12233 swapped = true;
12234 break;
12235 }
12236 }
12237 }
12238 }
12239 if (swapped)
12240 result = VK_INCOMPLETE;
12241 break;
12242 }
12243 }
12244
12245 // Move to the beginning
12246 for (const FragmentedBlock& block : immovableBlocks)
12247 {
12248 VmaBlockVector* vector = m_pBlockVectors[block.data];
12249 VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
12250
12251 for (size_t i = m_ImmovableBlockCount; i < vector->GetBlockCount(); ++i)
12252 {
12253 if (vector->GetBlock(index: i) == block.block)
12254 {
12255 std::swap(a&: vector->m_Blocks[i], b&: vector->m_Blocks[m_ImmovableBlockCount++]);
12256 break;
12257 }
12258 }
12259 }
12260 } while (false);
12261 }
12262
12263 // Bulk-map destination blocks
12264 for (const FragmentedBlock& block : mappedBlocks)
12265 {
        VkResult res = block.block->Map(allocator, block.data, VMA_NULL);
12267 VMA_ASSERT(res == VK_SUCCESS);
12268 }
12269 return result;
12270}
12271
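// Dispatches one defragmentation step for a single block vector to the algorithm
// selected via VMA_DEFRAGMENTATION_FLAG_ALGORITHM_* bits at context creation.
// Returns true when the current pass should end (pass limits hit or the user break
// callback fired). An unknown algorithm asserts in debug builds and falls back to
// the balanced algorithm.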
12272bool VmaDefragmentationContext_T::ComputeDefragmentation(VmaBlockVector& vector, size_t index)
12273{
12274 switch (m_Algorithm)
12275 {
12276 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT:
12277 return ComputeDefragmentation_Fast(vector);
12278 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
        return ComputeDefragmentation_Balanced(vector, index, true);
12280 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT:
12281 return ComputeDefragmentation_Full(vector);
12282 case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
12283 return ComputeDefragmentation_Extensive(vector, index);
12284 default:
12285 VMA_ASSERT(0);
        return ComputeDefragmentation_Balanced(vector, index, true);
12287 }
12288}
12289
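// Collects everything needed to relocate one allocation: the source allocation, its
// size, alignment, and suballocation type, plus allocation-create flags reconstructed
// from the source's mapping state so the destination is created with equivalent
// host-access properties.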
12290VmaDefragmentationContext_T::MoveAllocationData VmaDefragmentationContext_T::GetMoveData(
12291 VmaAllocHandle handle, VmaBlockMetadata* metadata)
12292{
12293 MoveAllocationData moveData;
    moveData.move.srcAllocation = (VmaAllocation)metadata->GetAllocationUserData(handle);
12295 moveData.size = moveData.move.srcAllocation->GetSize();
12296 moveData.alignment = moveData.move.srcAllocation->GetAlignment();
12297 moveData.type = moveData.move.srcAllocation->GetSuballocationType();
12298 moveData.flags = 0;
12299
12300 if (moveData.move.srcAllocation->IsPersistentMap())
12301 moveData.flags |= VMA_ALLOCATION_CREATE_MAPPED_BIT;
12302 if (moveData.move.srcAllocation->IsMappingAllowed())
12303 moveData.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
12304
12305 return moveData;
12306}
12307
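// Gatekeeper for every candidate move: first honors the user's break callback, then
// enforces the per-pass byte budget. Up to MAX_ALLOCS_TO_IGNORE oversized allocations
// are skipped (Ignore) before the whole pass is ended (End).
//
// A minimal sketch of a user-side break callback (hypothetical names), installable
// through VmaDefragmentationInfo::pfnBreakCallback / pBreakCallbackUserData:
//
//   static VkBool32 VKAPI_PTR MyBreakCallback(void* pUserData)
//   {
//       // Assumption: pUserData points to a caller-owned flag set when the
//       // frame's time budget for defragmentation is exhausted.
//       return *static_cast<const bool*>(pUserData) ? VK_TRUE : VK_FALSE;
//   }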
12308VmaDefragmentationContext_T::CounterStatus VmaDefragmentationContext_T::CheckCounters(VkDeviceSize bytes)
12309{
    // Check the custom break criteria if one was provided
12311 if (m_BreakCallback && m_BreakCallback(m_BreakCallbackUserData))
12312 return CounterStatus::End;
12313
    // Ignore the allocation if it would exceed the maximum bytes to copy in this pass
12315 if (m_PassStats.bytesMoved + bytes > m_MaxPassBytes)
12316 {
12317 if (++m_IgnoredAllocs < MAX_ALLOCS_TO_IGNORE)
12318 return CounterStatus::Ignore;
12319 else
12320 return CounterStatus::End;
12321 }
12322 else
12323 m_IgnoredAllocs = 0;
12324 return CounterStatus::Pass;
12325}
12326
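// Records a committed move in the per-pass statistics and returns true when either
// m_MaxPassAllocations or m_MaxPassBytes has been reached, which ends the pass early.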
12327bool VmaDefragmentationContext_T::IncrementCounters(VkDeviceSize bytes)
12328{
12329 m_PassStats.bytesMoved += bytes;
    // Early return when the maximum for this pass has been reached
12331 if (++m_PassStats.allocationsMoved >= m_MaxPassAllocations || m_PassStats.bytesMoved >= m_MaxPassBytes)
12332 {
12333 VMA_ASSERT((m_PassStats.allocationsMoved == m_MaxPassAllocations ||
12334 m_PassStats.bytesMoved == m_MaxPassBytes) && "Exceeded maximal pass threshold!");
12335 return true;
12336 }
12337 return false;
12338}
12339
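// Compacts a single block in place: for each allocation at a non-zero offset, requests
// a new placement using VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT and records a
// move only when the new offset is strictly lower than the current one. Returns true
// only when the pass limits ended the scan early.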
12340bool VmaDefragmentationContext_T::ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block)
12341{
12342 VmaBlockMetadata* metadata = block->m_pMetadata;
12343
    for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
        handle != VK_NULL_HANDLE;
        handle = metadata->GetNextAllocation(handle))
    {
        MoveAllocationData moveData = GetMoveData(handle, metadata);
        // Ignore newly created allocations by defragmentation algorithm
        if (moveData.move.srcAllocation->GetUserData() == this)
            continue;
        switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
12353 {
12354 case CounterStatus::Ignore:
12355 continue;
12356 case CounterStatus::End:
12357 return true;
12358 case CounterStatus::Pass:
12359 break;
12360 default:
12361 VMA_ASSERT(0);
12362 }
12363
12364 VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
12365 if (offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
12366 {
12367 VmaAllocationRequest request = {};
            if (metadata->CreateAllocationRequest(
                moveData.size,
                moveData.alignment,
                false,
                moveData.type,
                VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
                &request))
            {
                if (metadata->GetAllocationOffset(request.allocHandle) < offset)
                {
                    if (vector.CommitAllocationRequest(
                        request,
                        block,
                        moveData.alignment,
                        moveData.flags,
                        this,
                        moveData.type,
                        &moveData.move.dstTmpAllocation) == VK_SUCCESS)
                    {
                        m_Moves.push_back(moveData.move);
                        if (IncrementCounters(moveData.size))
12389 return true;
12390 }
12391 }
12392 }
12393 }
12394 }
12395 return false;
12396}
12397
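// Scans blocks [start, end) and tries to reallocate the given data into the first one
// whose total free space can hold it. A successful move just records the move and stops
// scanning; the return value is true only when the pass limits were hit.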
12398bool VmaDefragmentationContext_T::AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector)
12399{
12400 for (; start < end; ++start)
12401 {
        VmaDeviceMemoryBlock* dstBlock = vector.GetBlock(start);
        if (dstBlock->m_pMetadata->GetSumFreeSize() >= data.size)
        {
            if (vector.AllocateFromBlock(dstBlock,
                data.size,
                data.alignment,
                data.flags,
                this,
                data.type,
                0,
                &data.move.dstTmpAllocation) == VK_SUCCESS)
            {
                m_Moves.push_back(data.move);
                if (IncrementCounters(data.size))
12416 return true;
12417 break;
12418 }
12419 }
12420 }
12421 return false;
12422}
12423
12424bool VmaDefragmentationContext_T::ComputeDefragmentation_Fast(VmaBlockVector& vector)
12425{
12426 // Move only between blocks
12427
12428 // Go through allocations in last blocks and try to fit them inside first ones
12429 for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
12430 {
        VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore newly created allocations by defragmentation algorithm
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
12442 {
12443 case CounterStatus::Ignore:
12444 continue;
12445 case CounterStatus::End:
12446 return true;
12447 case CounterStatus::Pass:
12448 break;
12449 default:
12450 VMA_ASSERT(0);
12451 }
12452
12453 // Check all previous blocks for free space
            if (AllocInOtherBlock(0, i, moveData, vector))
12455 return true;
12456 }
12457 }
12458 return false;
12459}
12460
12461bool VmaDefragmentationContext_T::ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update)
12462{
    // Go over every allocation and try to fit it into previous blocks at the lowest offsets.
    // If that is not possible, realloc it within its own block to minimize its offset (excluding offset == 0),
    // but only if there are noticeable gaps between allocations (heuristic: e.g. the average allocation size in the block).
    VMA_ASSERT(m_AlgorithmState != VMA_NULL);

    StateBalanced& vectorState = reinterpret_cast<StateBalanced*>(m_AlgorithmState)[index];
    if (update && vectorState.avgAllocSize == UINT64_MAX)
        UpdateVectorStatistics(vector, vectorState);
12471
12472 const size_t startMoveCount = m_Moves.size();
12473 VkDeviceSize minimalFreeRegion = vectorState.avgFreeSize / 2;
12474 for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
12475 {
        VmaDeviceMemoryBlock* block = vector.GetBlock(i);
        VmaBlockMetadata* metadata = block->m_pMetadata;
        VkDeviceSize prevFreeRegionSize = 0;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore newly created allocations by defragmentation algorithm
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
12489 {
12490 case CounterStatus::Ignore:
12491 continue;
12492 case CounterStatus::End:
12493 return true;
12494 case CounterStatus::Pass:
12495 break;
12496 default:
12497 VMA_ASSERT(0);
12498 }
12499
12500 // Check all previous blocks for free space
12501 const size_t prevMoveCount = m_Moves.size();
            if (AllocInOtherBlock(0, i, moveData, vector))
12503 return true;
12504
            VkDeviceSize nextFreeRegionSize = metadata->GetNextFreeRegionSize(handle);
12506 // If no room found then realloc within block for lower offset
12507 VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
12508 if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
12509 {
12510 // Check if realloc will make sense
12511 if (prevFreeRegionSize >= minimalFreeRegion ||
12512 nextFreeRegionSize >= minimalFreeRegion ||
12513 moveData.size <= vectorState.avgFreeSize ||
12514 moveData.size <= vectorState.avgAllocSize)
12515 {
12516 VmaAllocationRequest request = {};
                    if (metadata->CreateAllocationRequest(
                        moveData.size,
                        moveData.alignment,
                        false,
                        moveData.type,
                        VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
                        &request))
                    {
                        if (metadata->GetAllocationOffset(request.allocHandle) < offset)
                        {
                            if (vector.CommitAllocationRequest(
                                request,
                                block,
                                moveData.alignment,
                                moveData.flags,
                                this,
                                moveData.type,
                                &moveData.move.dstTmpAllocation) == VK_SUCCESS)
                            {
                                m_Moves.push_back(moveData.move);
                                if (IncrementCounters(moveData.size))
12538 return true;
12539 }
12540 }
12541 }
12542 }
12543 }
12544 prevFreeRegionSize = nextFreeRegionSize;
12545 }
12546 }
12547
12548 // No moves performed, update statistics to current vector state
12549 if (startMoveCount == m_Moves.size() && !update)
12550 {
12551 vectorState.avgAllocSize = UINT64_MAX;
        return ComputeDefragmentation_Balanced(vector, index, false);
12553 }
12554 return false;
12555}
12556
12557bool VmaDefragmentationContext_T::ComputeDefragmentation_Full(VmaBlockVector& vector)
12558{
12559 // Go over every allocation and try to fit it in previous blocks at lowest offsets,
12560 // if not possible: realloc within single block to minimize offset (exclude offset == 0)
12561
12562 for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
12563 {
        VmaDeviceMemoryBlock* block = vector.GetBlock(i);
        VmaBlockMetadata* metadata = block->m_pMetadata;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore newly created allocations by defragmentation algorithm
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
12576 {
12577 case CounterStatus::Ignore:
12578 continue;
12579 case CounterStatus::End:
12580 return true;
12581 case CounterStatus::Pass:
12582 break;
12583 default:
12584 VMA_ASSERT(0);
12585 }
12586
12587 // Check all previous blocks for free space
12588 const size_t prevMoveCount = m_Moves.size();
            if (AllocInOtherBlock(0, i, moveData, vector))
12590 return true;
12591
12592 // If no room found then realloc within block for lower offset
12593 VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
12594 if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
12595 {
12596 VmaAllocationRequest request = {};
                if (metadata->CreateAllocationRequest(
                    moveData.size,
                    moveData.alignment,
                    false,
                    moveData.type,
                    VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
                    &request))
                {
                    if (metadata->GetAllocationOffset(request.allocHandle) < offset)
                    {
                        if (vector.CommitAllocationRequest(
                            request,
                            block,
                            moveData.alignment,
                            moveData.flags,
                            this,
                            moveData.type,
                            &moveData.move.dstTmpAllocation) == VK_SUCCESS)
                        {
                            m_Moves.push_back(moveData.move);
                            if (IncrementCounters(moveData.size))
12618 return true;
12619 }
12620 }
12621 }
12622 }
12623 }
12624 }
12625 return false;
12626}
12627
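// Per-vector state machine of the extensive algorithm: free up one block
// (FindFreeBlock*), then migrate one resource category at a time into the freed space
// (MoveTextures -> MoveBuffers -> MoveAll, so bufferImageGranularity conflicts are not
// reintroduced), then Cleanup packs the remaining blocks in place before Done.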
12628bool VmaDefragmentationContext_T::ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index)
12629{
    // Free a single block first, then populate it to the brim, then free another block, and so on

    // Fall back to the previous algorithm, since without granularity conflicts it can achieve maximum packing
12633 if (vector.m_BufferImageGranularity == 1)
12634 return ComputeDefragmentation_Full(vector);
12635
12636 VMA_ASSERT(m_AlgorithmState != VMA_NULL);
12637
12638 StateExtensive& vectorState = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[index];
12639
12640 bool texturePresent = false, bufferPresent = false, otherPresent = false;
12641 switch (vectorState.operation)
12642 {
12643 case StateExtensive::Operation::Done: // Vector defragmented
12644 return false;
12645 case StateExtensive::Operation::FindFreeBlockBuffer:
12646 case StateExtensive::Operation::FindFreeBlockTexture:
12647 case StateExtensive::Operation::FindFreeBlockAll:
12648 {
12649 // No more blocks to free, just perform fast realloc and move to cleanup
12650 if (vectorState.firstFreeBlock == 0)
12651 {
12652 vectorState.operation = StateExtensive::Operation::Cleanup;
12653 return ComputeDefragmentation_Fast(vector);
12654 }
12655
12656 // No free blocks, have to clear last one
12657 size_t last = (vectorState.firstFreeBlock == SIZE_MAX ? vector.GetBlockCount() : vectorState.firstFreeBlock) - 1;
        VmaBlockMetadata* freeMetadata = vector.GetBlock(last)->m_pMetadata;

        const size_t prevMoveCount = m_Moves.size();
        for (VmaAllocHandle handle = freeMetadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = freeMetadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, freeMetadata);
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
12667 {
12668 case CounterStatus::Ignore:
12669 continue;
12670 case CounterStatus::End:
12671 return true;
12672 case CounterStatus::Pass:
12673 break;
12674 default:
12675 VMA_ASSERT(0);
12676 }
12677
12678 // Check all previous blocks for free space
            if (AllocInOtherBlock(0, last, moveData, vector))
            {
                // Full clear performed already
                if (prevMoveCount != m_Moves.size() && freeMetadata->GetNextAllocation(handle) == VK_NULL_HANDLE)
12683 vectorState.firstFreeBlock = last;
12684 return true;
12685 }
12686 }
12687
12688 if (prevMoveCount == m_Moves.size())
12689 {
12690 // Cannot perform full clear, have to move data in other blocks around
12691 if (last != 0)
12692 {
12693 for (size_t i = last - 1; i; --i)
12694 {
                    if (ReallocWithinBlock(vector, vector.GetBlock(i)))
12696 return true;
12697 }
12698 }
12699
12700 if (prevMoveCount == m_Moves.size())
12701 {
12702 // No possible reallocs within blocks, try to move them around fast
12703 return ComputeDefragmentation_Fast(vector);
12704 }
12705 }
12706 else
12707 {
12708 switch (vectorState.operation)
12709 {
12710 case StateExtensive::Operation::FindFreeBlockBuffer:
12711 vectorState.operation = StateExtensive::Operation::MoveBuffers;
12712 break;
12713 case StateExtensive::Operation::FindFreeBlockTexture:
12714 vectorState.operation = StateExtensive::Operation::MoveTextures;
12715 break;
12716 case StateExtensive::Operation::FindFreeBlockAll:
12717 vectorState.operation = StateExtensive::Operation::MoveAll;
12718 break;
12719 default:
12720 VMA_ASSERT(0);
12721 vectorState.operation = StateExtensive::Operation::MoveTextures;
12722 }
12723 vectorState.firstFreeBlock = last;
            // Nothing was done and a free block was found without any reallocations, so more reallocs can run in the same pass
12725 return ComputeDefragmentation_Extensive(vector, index);
12726 }
12727 break;
12728 }
12729 case StateExtensive::Operation::MoveTextures:
12730 {
        if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL, vector,
            vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
12733 {
12734 if (texturePresent)
12735 {
12736 vectorState.operation = StateExtensive::Operation::FindFreeBlockTexture;
12737 return ComputeDefragmentation_Extensive(vector, index);
12738 }
12739
12740 if (!bufferPresent && !otherPresent)
12741 {
12742 vectorState.operation = StateExtensive::Operation::Cleanup;
12743 break;
12744 }
12745
12746 // No more textures to move, check buffers
12747 vectorState.operation = StateExtensive::Operation::MoveBuffers;
12748 bufferPresent = false;
12749 otherPresent = false;
12750 }
12751 else
12752 break;
12753 VMA_FALLTHROUGH; // Fallthrough
12754 }
12755 case StateExtensive::Operation::MoveBuffers:
12756 {
        if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_BUFFER, vector,
            vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
12759 {
12760 if (bufferPresent)
12761 {
12762 vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
12763 return ComputeDefragmentation_Extensive(vector, index);
12764 }
12765
12766 if (!otherPresent)
12767 {
12768 vectorState.operation = StateExtensive::Operation::Cleanup;
12769 break;
12770 }
12771
12772 // No more buffers to move, check all others
12773 vectorState.operation = StateExtensive::Operation::MoveAll;
12774 otherPresent = false;
12775 }
12776 else
12777 break;
12778 VMA_FALLTHROUGH; // Fallthrough
12779 }
12780 case StateExtensive::Operation::MoveAll:
12781 {
        if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_FREE, vector,
            vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
12784 {
12785 if (otherPresent)
12786 {
12787 vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
12788 return ComputeDefragmentation_Extensive(vector, index);
12789 }
12790 // Everything moved
12791 vectorState.operation = StateExtensive::Operation::Cleanup;
12792 }
12793 break;
12794 }
12795 case StateExtensive::Operation::Cleanup:
12796 // Cleanup is handled below so that other operations may reuse the cleanup code. This case is here to prevent the unhandled enum value warning (C4062).
12797 break;
12798 }
12799
12800 if (vectorState.operation == StateExtensive::Operation::Cleanup)
12801 {
12802 // All other work done, pack data in blocks even tighter if possible
12803 const size_t prevMoveCount = m_Moves.size();
12804 for (size_t i = 0; i < vector.GetBlockCount(); ++i)
12805 {
            if (ReallocWithinBlock(vector, vector.GetBlock(i)))
12807 return true;
12808 }
12809
12810 if (prevMoveCount == m_Moves.size())
12811 vectorState.operation = StateExtensive::Operation::Done;
12812 }
12813 return false;
12814}
12815
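// Recomputes the average allocation size and average free-region size across all blocks
// of the vector. These averages drive the balanced algorithm's heuristic for deciding
// whether an in-block realloc is worthwhile. Assumes the vector holds at least one
// allocation and one free region, as the averages are computed by plain division.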
12816void VmaDefragmentationContext_T::UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state)
12817{
12818 size_t allocCount = 0;
12819 size_t freeCount = 0;
12820 state.avgFreeSize = 0;
12821 state.avgAllocSize = 0;
12822
12823 for (size_t i = 0; i < vector.GetBlockCount(); ++i)
12824 {
        VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;
12826
12827 allocCount += metadata->GetAllocationCount();
12828 freeCount += metadata->GetFreeRegionsCount();
12829 state.avgFreeSize += metadata->GetSumFreeSize();
12830 state.avgAllocSize += metadata->GetSize();
12831 }
12832
12833 state.avgAllocSize = (state.avgAllocSize - state.avgFreeSize) / allocCount;
12834 state.avgFreeSize /= freeCount;
12835}
12836
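// Walks the blocks below firstFreeBlock and moves allocations whose suballocation type
// does not conflict with currentType (per bufferImageGranularity rules) into the free
// blocks at the end of the vector, flagging which resource categories remain behind.
// Returns true when this step recorded no new moves, telling the extensive algorithm
// to advance its state.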
12837bool VmaDefragmentationContext_T::MoveDataToFreeBlocks(VmaSuballocationType currentType,
12838 VmaBlockVector& vector, size_t firstFreeBlock,
12839 bool& texturePresent, bool& bufferPresent, bool& otherPresent)
12840{
12841 const size_t prevMoveCount = m_Moves.size();
12842 for (size_t i = firstFreeBlock ; i;)
12843 {
        VmaDeviceMemoryBlock* block = vector.GetBlock(--i);
        VmaBlockMetadata* metadata = block->m_pMetadata;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore newly created allocations by defragmentation algorithm
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
12856 {
12857 case CounterStatus::Ignore:
12858 continue;
12859 case CounterStatus::End:
12860 return true;
12861 case CounterStatus::Pass:
12862 break;
12863 default:
12864 VMA_ASSERT(0);
12865 }
12866
            // Move only a single type of resource at once
            if (!VmaIsBufferImageGranularityConflict(moveData.type, currentType))
            {
                // Try to fit the allocation into one of the free blocks
                if (AllocInOtherBlock(firstFreeBlock, vector.GetBlockCount(), moveData, vector))
                    return false;
            }

            if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL))
                texturePresent = true;
            else if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_BUFFER))
12878 bufferPresent = true;
12879 else
12880 otherPresent = true;
12881 }
12882 }
12883 return prevMoveCount == m_Moves.size();
12884}
12885#endif // _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
12886
12887#ifndef _VMA_POOL_T_FUNCTIONS
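// A custom pool is a thin wrapper over a single VmaBlockVector. createInfo.blockSize == 0
// selects the heuristic preferred block size and leaves the size non-explicit, so the
// vector may still fall back to creating smaller blocks when a full-sized one fails.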
12888VmaPool_T::VmaPool_T(
12889 VmaAllocator hAllocator,
12890 const VmaPoolCreateInfo& createInfo,
12891 VkDeviceSize preferredBlockSize)
12892 : m_BlockVector(
12893 hAllocator,
12894 this, // hParentPool
12895 createInfo.memoryTypeIndex,
12896 createInfo.blockSize != 0 ? createInfo.blockSize : preferredBlockSize,
12897 createInfo.minBlockCount,
12898 createInfo.maxBlockCount,
        (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
12900 createInfo.blockSize != 0, // explicitBlockSize
12901 createInfo.flags & VMA_POOL_CREATE_ALGORITHM_MASK, // algorithm
12902 createInfo.priority,
12903 VMA_MAX(hAllocator->GetMemoryTypeMinAlignment(createInfo.memoryTypeIndex), createInfo.minAllocationAlignment),
12904 createInfo.pMemoryAllocateNext),
12905 m_Id(0),
12906 m_Name(VMA_NULL) {}
12907
12908VmaPool_T::~VmaPool_T()
12909{
12910 VMA_ASSERT(m_PrevPool == VMA_NULL && m_NextPool == VMA_NULL);
12911
12912 const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
    VmaFreeString(allocs, m_Name);
12914}
12915
12916void VmaPool_T::SetName(const char* pName)
12917{
12918 const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
    VmaFreeString(allocs, m_Name);

    if (pName != VMA_NULL)
    {
        m_Name = VmaCreateStringCopy(allocs, pName);
12924 }
12925 else
12926 {
12927 m_Name = VMA_NULL;
12928 }
12929}
12930#endif // _VMA_POOL_T_FUNCTIONS
12931
12932#ifndef _VMA_ALLOCATOR_T_FUNCTIONS
12933VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
12934 m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
12935 m_VulkanApiVersion(pCreateInfo->vulkanApiVersion != 0 ? pCreateInfo->vulkanApiVersion : VK_API_VERSION_1_0),
12936 m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
12937 m_UseKhrBindMemory2((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0),
12938 m_UseExtMemoryBudget((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0),
12939 m_UseAmdDeviceCoherentMemory((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT) != 0),
12940 m_UseKhrBufferDeviceAddress((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT) != 0),
12941 m_UseExtMemoryPriority((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT) != 0),
12942 m_UseKhrMaintenance4((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT) != 0),
12943 m_UseKhrMaintenance5((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT) != 0),
12944 m_UseKhrExternalMemoryWin32((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_EXTERNAL_MEMORY_WIN32_BIT) != 0),
12945 m_hDevice(pCreateInfo->device),
12946 m_hInstance(pCreateInfo->instance),
12947 m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
12948 m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
12949 *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
12950 m_AllocationObjectAllocator(&m_AllocationCallbacks),
12951 m_HeapSizeLimitMask(0),
12952 m_DeviceMemoryCount(0),
12953 m_PreferredLargeHeapBlockSize(0),
12954 m_PhysicalDevice(pCreateInfo->physicalDevice),
12955 m_GpuDefragmentationMemoryTypeBits(UINT32_MAX),
12956 m_NextPoolId(0),
12957 m_GlobalMemoryTypeBits(UINT32_MAX)
12958{
12959 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
12960 {
12961 m_UseKhrDedicatedAllocation = false;
12962 m_UseKhrBindMemory2 = false;
12963 }
12964
12965 if(VMA_DEBUG_DETECT_CORRUPTION)
12966 {
        // Needs to be a multiple of sizeof(uint32_t) because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
12968 VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
12969 }
12970
12971 VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device && pCreateInfo->instance);
12972
12973 if(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0))
12974 {
12975#if !(VMA_DEDICATED_ALLOCATION)
12976 if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
12977 {
12978 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
12979 }
12980#endif
12981#if !(VMA_BIND_MEMORY2)
12982 if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0)
12983 {
12984 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT set but required extension is disabled by preprocessor macros.");
12985 }
12986#endif
12987 }
12988#if !(VMA_MEMORY_BUDGET)
12989 if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0)
12990 {
12991 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT set but required extension is disabled by preprocessor macros.");
12992 }
12993#endif
12994#if !(VMA_BUFFER_DEVICE_ADDRESS)
12995 if(m_UseKhrBufferDeviceAddress)
12996 {
12997 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT is set but required extension or Vulkan 1.2 is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
12998 }
12999#endif
13000#if VMA_VULKAN_VERSION < 1004000
13001 VMA_ASSERT(m_VulkanApiVersion < VK_MAKE_VERSION(1, 4, 0) && "vulkanApiVersion >= VK_API_VERSION_1_4 but required Vulkan version is disabled by preprocessor macros.");
13002#endif
13003#if VMA_VULKAN_VERSION < 1003000
13004 VMA_ASSERT(m_VulkanApiVersion < VK_MAKE_VERSION(1, 3, 0) && "vulkanApiVersion >= VK_API_VERSION_1_3 but required Vulkan version is disabled by preprocessor macros.");
13005#endif
13006#if VMA_VULKAN_VERSION < 1002000
13007 VMA_ASSERT(m_VulkanApiVersion < VK_MAKE_VERSION(1, 2, 0) && "vulkanApiVersion >= VK_API_VERSION_1_2 but required Vulkan version is disabled by preprocessor macros.");
13008#endif
13009#if VMA_VULKAN_VERSION < 1001000
13010 VMA_ASSERT(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0) && "vulkanApiVersion >= VK_API_VERSION_1_1 but required Vulkan version is disabled by preprocessor macros.");
13011#endif
13012#if !(VMA_MEMORY_PRIORITY)
13013 if(m_UseExtMemoryPriority)
13014 {
13015 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
13016 }
13017#endif
13018#if !(VMA_KHR_MAINTENANCE4)
13019 if(m_UseKhrMaintenance4)
13020 {
13021 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
13022 }
13023#endif
13024#if !(VMA_KHR_MAINTENANCE5)
13025 if(m_UseKhrMaintenance5)
13026 {
13027 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
13028 }
13029#endif
13036
13037#if !(VMA_EXTERNAL_MEMORY_WIN32)
13038 if(m_UseKhrExternalMemoryWin32)
13039 {
13040 VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_EXTERNAL_MEMORY_WIN32_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
13041 }
13042#endif
13043
    memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    memset(&m_MemProps, 0, sizeof(m_MemProps));

    memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    memset(&m_VulkanFunctions, 0, sizeof(m_VulkanFunctions));

#if VMA_EXTERNAL_MEMORY
    memset(&m_TypeExternalMemoryHandleTypes, 0, sizeof(m_TypeExternalMemoryHandleTypes));
13053#endif // #if VMA_EXTERNAL_MEMORY
13054
13055 if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
13056 {
13057 m_DeviceMemoryCallbacks.pUserData = pCreateInfo->pDeviceMemoryCallbacks->pUserData;
13058 m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
13059 m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
13060 }
13061
    ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
13063
13064 (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
13065 (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
13066
13067 VMA_ASSERT(VmaIsPow2(VMA_MIN_ALIGNMENT));
13068 VMA_ASSERT(VmaIsPow2(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY));
13069 VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.bufferImageGranularity));
13070 VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.nonCoherentAtomSize));
13071
13072 m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
13073 pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
13074
13075 m_GlobalMemoryTypeBits = CalculateGlobalMemoryTypeBits();
13076
13077#if VMA_EXTERNAL_MEMORY
13078 if(pCreateInfo->pTypeExternalMemoryHandleTypes != VMA_NULL)
13079 {
        memcpy(m_TypeExternalMemoryHandleTypes, pCreateInfo->pTypeExternalMemoryHandleTypes,
            sizeof(VkExternalMemoryHandleTypeFlagsKHR) * GetMemoryTypeCount());
13082 }
13083#endif // #if VMA_EXTERNAL_MEMORY
13084
13085 if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
13086 {
13087 for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
13088 {
13089 const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
13090 if(limit != VK_WHOLE_SIZE)
13091 {
13092 m_HeapSizeLimitMask |= 1u << heapIndex;
13093 if(limit < m_MemProps.memoryHeaps[heapIndex].size)
13094 {
13095 m_MemProps.memoryHeaps[heapIndex].size = limit;
13096 }
13097 }
13098 }
13099 }
13100
13101 for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
13102 {
13103 // Create only supported types
13104 if((m_GlobalMemoryTypeBits & (1u << memTypeIndex)) != 0)
13105 {
13106 const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
13107 m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
13108 this,
13109 VK_NULL_HANDLE, // hParentPool
13110 memTypeIndex,
13111 preferredBlockSize,
13112 0,
13113 SIZE_MAX,
13114 GetBufferImageGranularity(),
13115 false, // explicitBlockSize
13116 0, // algorithm
13117 0.5f, // priority (0.5 is the default per Vulkan spec)
13118 GetMemoryTypeMinAlignment(memTypeIndex), // minAllocationAlignment
                VMA_NULL); // pMemoryAllocateNext
13120 // No need to call m_pBlockVectors[memTypeIndex][blockVectorTypeIndex]->CreateMinBlocks here,
13121 // because minBlockCount is 0.
13122 }
13123 }
13124}
13125
13126VkResult VmaAllocator_T::Init(const VmaAllocatorCreateInfo* pCreateInfo)
13127{
13128 VkResult res = VK_SUCCESS;
13129
13130#if VMA_MEMORY_BUDGET
13131 if(m_UseExtMemoryBudget)
13132 {
13133 UpdateVulkanBudget();
13134 }
13135#endif // #if VMA_MEMORY_BUDGET
13136
13137 return res;
13138}
13139
13140VmaAllocator_T::~VmaAllocator_T()
13141{
13142 VMA_ASSERT(m_Pools.IsEmpty());
13143
13144 for(size_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
13145 {
        vma_delete(this, m_pBlockVectors[memTypeIndex]);
13147 }
13148}
13149
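// Assembles the Vulkan function-pointer table from up to three sources, in order:
// statically linked symbols (VMA_STATIC_VULKAN_FUNCTIONS), user-provided pointers in
// VmaVulkanFunctions (which override the static ones when non-null), and finally
// vkGetInstanceProcAddr/vkGetDeviceProcAddr lookups (VMA_DYNAMIC_VULKAN_FUNCTIONS),
// which only fill entries that are still null.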
13150void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
13151{
13152#if VMA_STATIC_VULKAN_FUNCTIONS == 1
13153 ImportVulkanFunctions_Static();
13154#endif
13155
13156 if(pVulkanFunctions != VMA_NULL)
13157 {
13158 ImportVulkanFunctions_Custom(pVulkanFunctions);
13159 }
13160
13161#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
13162 ImportVulkanFunctions_Dynamic();
13163#endif
13164
13165 ValidateVulkanFunctions();
13166}
13167
13168#if VMA_STATIC_VULKAN_FUNCTIONS == 1
13169
13170void VmaAllocator_T::ImportVulkanFunctions_Static()
13171{
13172 // Vulkan 1.0
13173 m_VulkanFunctions.vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr)vkGetInstanceProcAddr;
13174 m_VulkanFunctions.vkGetDeviceProcAddr = (PFN_vkGetDeviceProcAddr)vkGetDeviceProcAddr;
13175 m_VulkanFunctions.vkGetPhysicalDeviceProperties = (PFN_vkGetPhysicalDeviceProperties)vkGetPhysicalDeviceProperties;
13176 m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vkGetPhysicalDeviceMemoryProperties;
13177 m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
13178 m_VulkanFunctions.vkFreeMemory = (PFN_vkFreeMemory)vkFreeMemory;
13179 m_VulkanFunctions.vkMapMemory = (PFN_vkMapMemory)vkMapMemory;
13180 m_VulkanFunctions.vkUnmapMemory = (PFN_vkUnmapMemory)vkUnmapMemory;
13181 m_VulkanFunctions.vkFlushMappedMemoryRanges = (PFN_vkFlushMappedMemoryRanges)vkFlushMappedMemoryRanges;
13182 m_VulkanFunctions.vkInvalidateMappedMemoryRanges = (PFN_vkInvalidateMappedMemoryRanges)vkInvalidateMappedMemoryRanges;
13183 m_VulkanFunctions.vkBindBufferMemory = (PFN_vkBindBufferMemory)vkBindBufferMemory;
13184 m_VulkanFunctions.vkBindImageMemory = (PFN_vkBindImageMemory)vkBindImageMemory;
13185 m_VulkanFunctions.vkGetBufferMemoryRequirements = (PFN_vkGetBufferMemoryRequirements)vkGetBufferMemoryRequirements;
13186 m_VulkanFunctions.vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vkGetImageMemoryRequirements;
13187 m_VulkanFunctions.vkCreateBuffer = (PFN_vkCreateBuffer)vkCreateBuffer;
13188 m_VulkanFunctions.vkDestroyBuffer = (PFN_vkDestroyBuffer)vkDestroyBuffer;
13189 m_VulkanFunctions.vkCreateImage = (PFN_vkCreateImage)vkCreateImage;
13190 m_VulkanFunctions.vkDestroyImage = (PFN_vkDestroyImage)vkDestroyImage;
13191 m_VulkanFunctions.vkCmdCopyBuffer = (PFN_vkCmdCopyBuffer)vkCmdCopyBuffer;
13192
13193 // Vulkan 1.1
13194#if VMA_VULKAN_VERSION >= 1001000
13195 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
13196 {
13197 m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2)vkGetBufferMemoryRequirements2;
13198 m_VulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2)vkGetImageMemoryRequirements2;
13199 m_VulkanFunctions.vkBindBufferMemory2KHR = (PFN_vkBindBufferMemory2)vkBindBufferMemory2;
13200 m_VulkanFunctions.vkBindImageMemory2KHR = (PFN_vkBindImageMemory2)vkBindImageMemory2;
13201 }
13202#endif
13203
13204#if VMA_VULKAN_VERSION >= 1001000
13205 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
13206 {
13207 m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR = (PFN_vkGetPhysicalDeviceMemoryProperties2)vkGetPhysicalDeviceMemoryProperties2;
13208 }
13209#endif
13210
13211#if VMA_VULKAN_VERSION >= 1003000
13212 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
13213 {
13214 m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements = (PFN_vkGetDeviceBufferMemoryRequirements)vkGetDeviceBufferMemoryRequirements;
13215 m_VulkanFunctions.vkGetDeviceImageMemoryRequirements = (PFN_vkGetDeviceImageMemoryRequirements)vkGetDeviceImageMemoryRequirements;
13216 }
13217#endif
13218}
13219
13220#endif // VMA_STATIC_VULKAN_FUNCTIONS == 1
13221
13222void VmaAllocator_T::ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions)
13223{
13224 VMA_ASSERT(pVulkanFunctions != VMA_NULL);
13225
13226#define VMA_COPY_IF_NOT_NULL(funcName) \
13227 if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
13228
13229 VMA_COPY_IF_NOT_NULL(vkGetInstanceProcAddr);
13230 VMA_COPY_IF_NOT_NULL(vkGetDeviceProcAddr);
13231 VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
13232 VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
13233 VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
13234 VMA_COPY_IF_NOT_NULL(vkFreeMemory);
13235 VMA_COPY_IF_NOT_NULL(vkMapMemory);
13236 VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
13237 VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
13238 VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
13239 VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
13240 VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
13241 VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
13242 VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
13243 VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
13244 VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
13245 VMA_COPY_IF_NOT_NULL(vkCreateImage);
13246 VMA_COPY_IF_NOT_NULL(vkDestroyImage);
13247 VMA_COPY_IF_NOT_NULL(vkCmdCopyBuffer);
13248
13249#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
13250 VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
13251 VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
13252#endif
13253
13254#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
13255 VMA_COPY_IF_NOT_NULL(vkBindBufferMemory2KHR);
13256 VMA_COPY_IF_NOT_NULL(vkBindImageMemory2KHR);
13257#endif
13258
13259#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
13260 VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties2KHR);
13261#endif
13262
13263#if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
13264 VMA_COPY_IF_NOT_NULL(vkGetDeviceBufferMemoryRequirements);
13265 VMA_COPY_IF_NOT_NULL(vkGetDeviceImageMemoryRequirements);
13266#endif
13267#if VMA_EXTERNAL_MEMORY_WIN32
13268 VMA_COPY_IF_NOT_NULL(vkGetMemoryWin32HandleKHR);
13269#endif
13270#undef VMA_COPY_IF_NOT_NULL
13271}
13272
13273#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
13274
13275void VmaAllocator_T::ImportVulkanFunctions_Dynamic()
13276{
13277 VMA_ASSERT(m_VulkanFunctions.vkGetInstanceProcAddr && m_VulkanFunctions.vkGetDeviceProcAddr &&
13278 "To use VMA_DYNAMIC_VULKAN_FUNCTIONS in new versions of VMA you now have to pass "
13279 "VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as VmaAllocatorCreateInfo::pVulkanFunctions. "
13280 "Other members can be null.");
13281
13282#define VMA_FETCH_INSTANCE_FUNC(memberName, functionPointerType, functionNameString) \
13283 if(m_VulkanFunctions.memberName == VMA_NULL) \
13284 m_VulkanFunctions.memberName = \
13285 (functionPointerType)m_VulkanFunctions.vkGetInstanceProcAddr(m_hInstance, functionNameString);
13286#define VMA_FETCH_DEVICE_FUNC(memberName, functionPointerType, functionNameString) \
13287 if(m_VulkanFunctions.memberName == VMA_NULL) \
13288 m_VulkanFunctions.memberName = \
13289 (functionPointerType)m_VulkanFunctions.vkGetDeviceProcAddr(m_hDevice, functionNameString);
13290
13291 VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceProperties, PFN_vkGetPhysicalDeviceProperties, "vkGetPhysicalDeviceProperties");
13292 VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties, PFN_vkGetPhysicalDeviceMemoryProperties, "vkGetPhysicalDeviceMemoryProperties");
13293 VMA_FETCH_DEVICE_FUNC(vkAllocateMemory, PFN_vkAllocateMemory, "vkAllocateMemory");
13294 VMA_FETCH_DEVICE_FUNC(vkFreeMemory, PFN_vkFreeMemory, "vkFreeMemory");
13295 VMA_FETCH_DEVICE_FUNC(vkMapMemory, PFN_vkMapMemory, "vkMapMemory");
13296 VMA_FETCH_DEVICE_FUNC(vkUnmapMemory, PFN_vkUnmapMemory, "vkUnmapMemory");
13297 VMA_FETCH_DEVICE_FUNC(vkFlushMappedMemoryRanges, PFN_vkFlushMappedMemoryRanges, "vkFlushMappedMemoryRanges");
13298 VMA_FETCH_DEVICE_FUNC(vkInvalidateMappedMemoryRanges, PFN_vkInvalidateMappedMemoryRanges, "vkInvalidateMappedMemoryRanges");
13299 VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory, PFN_vkBindBufferMemory, "vkBindBufferMemory");
13300 VMA_FETCH_DEVICE_FUNC(vkBindImageMemory, PFN_vkBindImageMemory, "vkBindImageMemory");
13301 VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements, PFN_vkGetBufferMemoryRequirements, "vkGetBufferMemoryRequirements");
13302 VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements, PFN_vkGetImageMemoryRequirements, "vkGetImageMemoryRequirements");
13303 VMA_FETCH_DEVICE_FUNC(vkCreateBuffer, PFN_vkCreateBuffer, "vkCreateBuffer");
13304 VMA_FETCH_DEVICE_FUNC(vkDestroyBuffer, PFN_vkDestroyBuffer, "vkDestroyBuffer");
13305 VMA_FETCH_DEVICE_FUNC(vkCreateImage, PFN_vkCreateImage, "vkCreateImage");
13306 VMA_FETCH_DEVICE_FUNC(vkDestroyImage, PFN_vkDestroyImage, "vkDestroyImage");
13307 VMA_FETCH_DEVICE_FUNC(vkCmdCopyBuffer, PFN_vkCmdCopyBuffer, "vkCmdCopyBuffer");
13308
13309#if VMA_VULKAN_VERSION >= 1001000
13310 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
13311 {
13312 VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2, "vkGetBufferMemoryRequirements2");
13313 VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2, "vkGetImageMemoryRequirements2");
13314 VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2, "vkBindBufferMemory2");
13315 VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2, "vkBindImageMemory2");
13316 }
13317#endif
13318
13319#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
13320 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
13321 {
13322 VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2");
13323 // Try to fetch the pointer from the other name, based on suspected driver bug - see issue #410.
13324 VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
13325 }
13326 else if(m_UseExtMemoryBudget)
13327 {
13328 VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
13329 // Try to fetch the pointer from the other name, based on suspected driver bug - see issue #410.
13330 VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2");
13331 }
13332#endif
13333
13334#if VMA_DEDICATED_ALLOCATION
13335 if(m_UseKhrDedicatedAllocation)
13336 {
13337 VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2KHR, "vkGetBufferMemoryRequirements2KHR");
13338 VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2KHR, "vkGetImageMemoryRequirements2KHR");
13339 }
13340#endif
13341
13342#if VMA_BIND_MEMORY2
13343 if(m_UseKhrBindMemory2)
13344 {
13345 VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2KHR, "vkBindBufferMemory2KHR");
13346 VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2KHR, "vkBindImageMemory2KHR");
13347 }
13348#endif // #if VMA_BIND_MEMORY2
13349
13361#if VMA_VULKAN_VERSION >= 1003000
13362 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
13363 {
13364 VMA_FETCH_DEVICE_FUNC(vkGetDeviceBufferMemoryRequirements, PFN_vkGetDeviceBufferMemoryRequirements, "vkGetDeviceBufferMemoryRequirements");
13365 VMA_FETCH_DEVICE_FUNC(vkGetDeviceImageMemoryRequirements, PFN_vkGetDeviceImageMemoryRequirements, "vkGetDeviceImageMemoryRequirements");
13366 }
13367#endif
13368#if VMA_KHR_MAINTENANCE4
13369 if(m_UseKhrMaintenance4)
13370 {
13371 VMA_FETCH_DEVICE_FUNC(vkGetDeviceBufferMemoryRequirements, PFN_vkGetDeviceBufferMemoryRequirementsKHR, "vkGetDeviceBufferMemoryRequirementsKHR");
13372 VMA_FETCH_DEVICE_FUNC(vkGetDeviceImageMemoryRequirements, PFN_vkGetDeviceImageMemoryRequirementsKHR, "vkGetDeviceImageMemoryRequirementsKHR");
13373 }
13374#endif
13375#if VMA_EXTERNAL_MEMORY_WIN32
13376 if (m_UseKhrExternalMemoryWin32)
13377 {
13378 VMA_FETCH_DEVICE_FUNC(vkGetMemoryWin32HandleKHR, PFN_vkGetMemoryWin32HandleKHR, "vkGetMemoryWin32HandleKHR");
13379 }
13380#endif
13381#undef VMA_FETCH_DEVICE_FUNC
13382#undef VMA_FETCH_INSTANCE_FUNC
13383}
13384
13385#endif // VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
13386
13387void VmaAllocator_T::ValidateVulkanFunctions()
13388{
13389 VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
13390 VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
13391 VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
13392 VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
13393 VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
13394 VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
13395 VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
13396 VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
13397 VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
13398 VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
13399 VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
13400 VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
13401 VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
13402 VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
13403 VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
13404 VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
13405 VMA_ASSERT(m_VulkanFunctions.vkCmdCopyBuffer != VMA_NULL);
13406
13407#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
13408 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrDedicatedAllocation)
13409 {
13410 VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
13411 VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
13412 }
13413#endif
13414
13415#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
13416 if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrBindMemory2)
13417 {
13418 VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL);
13419 VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL);
13420 }
13421#endif
13422
13423#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
13424 if(m_UseExtMemoryBudget || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
13425 {
13426 VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR != VMA_NULL);
13427 }
13428#endif
13429#if VMA_EXTERNAL_MEMORY_WIN32
13430 if (m_UseKhrExternalMemoryWin32)
13431 {
13432 VMA_ASSERT(m_VulkanFunctions.vkGetMemoryWin32HandleKHR != VMA_NULL);
13433 }
13434#endif
13435
    // Not validating these due to suspected driver bugs where these function
    // pointers are null despite the correct extension or Vulkan version being enabled.
    // See issue #397. Their usage in VMA is optional anyway.
13439 //
13440 // VMA_ASSERT(m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements != VMA_NULL);
13441 // VMA_ASSERT(m_VulkanFunctions.vkGetDeviceImageMemoryRequirements != VMA_NULL);
13442}
13443
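// Chooses the default block size for a memory type: 1/8 of the heap size for "small"
// heaps (<= VMA_SMALL_HEAP_MAX_SIZE, 1 GiB by default), otherwise the preferred large
// heap block size (256 MiB by default), aligned up to 32 bytes. For example, a 512 MiB
// heap would get 64 MiB blocks under the default thresholds.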
13444VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
13445{
13446 const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
13447 const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
13448 const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    return VmaAlignUp(isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize, (VkDeviceSize)32);
13450}
13451
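// Allocation strategy for a single memory type, tried in order:
// 1. VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT forces a dedicated allocation.
// 2. Otherwise a dedicated allocation is preferred heuristically when the requested
//    size exceeds half the preferred block size, unless roughly 3/4 of
//    maxMemoryAllocationCount is already in use.
// 3. Suballocation from the block vector.
// 4. Dedicated memory as a last resort when it was allowed but not preferred.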
13452VkResult VmaAllocator_T::AllocateMemoryOfType(
13453 VmaPool pool,
13454 VkDeviceSize size,
13455 VkDeviceSize alignment,
13456 bool dedicatedPreferred,
13457 VkBuffer dedicatedBuffer,
13458 VkImage dedicatedImage,
13459 VmaBufferImageUsage dedicatedBufferImageUsage,
13460 const VmaAllocationCreateInfo& createInfo,
13461 uint32_t memTypeIndex,
13462 VmaSuballocationType suballocType,
13463 VmaDedicatedAllocationList& dedicatedAllocations,
13464 VmaBlockVector& blockVector,
13465 size_t allocationCount,
13466 VmaAllocation* pAllocations)
13467{
13468 VMA_ASSERT(pAllocations != VMA_NULL);
13469 VMA_DEBUG_LOG_FORMAT(" AllocateMemory: MemoryTypeIndex=%" PRIu32 ", AllocationCount=%zu, Size=%" PRIu64, memTypeIndex, allocationCount, size);
13470
13471 VmaAllocationCreateInfo finalCreateInfo = createInfo;
    VkResult res = CalcMemTypeParams(
        finalCreateInfo,
        memTypeIndex,
        size,
        allocationCount);
13477 if(res != VK_SUCCESS)
13478 return res;
13479
13480 if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
13481 {
13482 return AllocateDedicatedMemory(
13483 pool,
13484 size,
13485 suballocType,
13486 dedicatedAllocations,
13487 memTypeIndex,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
            (finalCreateInfo.flags &
                (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
            finalCreateInfo.pUserData,
            finalCreateInfo.priority,
            dedicatedBuffer,
            dedicatedImage,
            dedicatedBufferImageUsage,
            allocationCount,
            pAllocations,
            blockVector.GetAllocationNextPtr());
13501 }
13502 else
13503 {
13504 const bool canAllocateDedicated =
13505 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
13506 (pool == VK_NULL_HANDLE || !blockVector.HasExplicitBlockSize());
13507
13508 if(canAllocateDedicated)
13509 {
            // Heuristic: allocate dedicated memory if the requested size is greater than half of the preferred block size.
13511 if(size > blockVector.GetPreferredBlockSize() / 2)
13512 {
13513 dedicatedPreferred = true;
13514 }
13515 // Protection against creating each allocation as dedicated when we reach or exceed heap size/budget,
13516 // which can quickly deplete maxMemoryAllocationCount: Don't prefer dedicated allocations when above
13517 // 3/4 of the maximum allocation count.
13518 if(m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount < UINT32_MAX / 4 &&
13519 m_DeviceMemoryCount.load() > m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount * 3 / 4)
13520 {
13521 dedicatedPreferred = false;
13522 }
13523
13524 if(dedicatedPreferred)
13525 {
13526 res = AllocateDedicatedMemory(
13527 pool,
13528 size,
13529 suballocType,
13530 dedicatedAllocations,
13531 memTypeIndex,
                    (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
                    (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
                    (finalCreateInfo.flags &
                        (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
                    (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
                    finalCreateInfo.pUserData,
                    finalCreateInfo.priority,
                    dedicatedBuffer,
                    dedicatedImage,
                    dedicatedBufferImageUsage,
                    allocationCount,
                    pAllocations,
                    blockVector.GetAllocationNextPtr());
13545 if(res == VK_SUCCESS)
13546 {
13547 // Succeeded: AllocateDedicatedMemory function already filled pMemory, nothing more to do here.
13548 VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
13549 return VK_SUCCESS;
13550 }
13551 }
13552 }
13553
13554 res = blockVector.Allocate(
13555 size,
13556 alignment,
            finalCreateInfo,
13558 suballocType,
13559 allocationCount,
13560 pAllocations);
13561 if(res == VK_SUCCESS)
13562 return VK_SUCCESS;
13563
13564 // Try dedicated memory.
13565 if(canAllocateDedicated && !dedicatedPreferred)
13566 {
13567 res = AllocateDedicatedMemory(
13568 pool,
13569 size,
13570 suballocType,
13571 dedicatedAllocations,
13572 memTypeIndex,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
                (finalCreateInfo.flags &
                    (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
                finalCreateInfo.pUserData,
                finalCreateInfo.priority,
                dedicatedBuffer,
                dedicatedImage,
                dedicatedBufferImageUsage,
                allocationCount,
                pAllocations,
                blockVector.GetAllocationNextPtr());
13586 if(res == VK_SUCCESS)
13587 {
13588 // Succeeded: AllocateDedicatedMemory function already filled pMemory, nothing more to do here.
13589 VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
13590 return VK_SUCCESS;
13591 }
13592 }
13593 // Everything failed: Return error code.
13594 VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
13595 return res;
13596 }
13597}
13598
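// Allocates allocationCount separate VkDeviceMemory objects of the given size, building
// the pNext chain as needed: VkMemoryDedicatedAllocateInfo when bound to a specific
// buffer or image, VkMemoryAllocateFlagsInfo for buffer device address,
// VkMemoryPriorityAllocateInfoEXT, and VkExportMemoryAllocateInfoKHR for exportable
// memory types. If any page fails, all pages already created by this call are freed.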
13599VkResult VmaAllocator_T::AllocateDedicatedMemory(
13600 VmaPool pool,
13601 VkDeviceSize size,
13602 VmaSuballocationType suballocType,
13603 VmaDedicatedAllocationList& dedicatedAllocations,
13604 uint32_t memTypeIndex,
13605 bool map,
13606 bool isUserDataString,
13607 bool isMappingAllowed,
13608 bool canAliasMemory,
13609 void* pUserData,
13610 float priority,
13611 VkBuffer dedicatedBuffer,
13612 VkImage dedicatedImage,
13613 VmaBufferImageUsage dedicatedBufferImageUsage,
13614 size_t allocationCount,
13615 VmaAllocation* pAllocations,
13616 const void* pNextChain)
13617{
13618 VMA_ASSERT(allocationCount > 0 && pAllocations);
13619
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
13621 allocInfo.memoryTypeIndex = memTypeIndex;
13622 allocInfo.allocationSize = size;
13623 allocInfo.pNext = pNextChain;
13624
13625#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
13627 if(!canAliasMemory)
13628 {
13629 if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
13630 {
13631 if(dedicatedBuffer != VK_NULL_HANDLE)
13632 {
13633 VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
13634 dedicatedAllocInfo.buffer = dedicatedBuffer;
13635 VmaPnextChainPushFront(mainStruct: &allocInfo, newStruct: &dedicatedAllocInfo);
13636 }
13637 else if(dedicatedImage != VK_NULL_HANDLE)
13638 {
13639 dedicatedAllocInfo.image = dedicatedImage;
13640 VmaPnextChainPushFront(mainStruct: &allocInfo, newStruct: &dedicatedAllocInfo);
13641 }
13642 }
13643 }
13644#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
13645
13646#if VMA_BUFFER_DEVICE_ADDRESS
13647 VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { .sType: VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
13648 if(m_UseKhrBufferDeviceAddress)
13649 {
13650 bool canContainBufferWithDeviceAddress = true;
13651 if(dedicatedBuffer != VK_NULL_HANDLE)
13652 {
13653 canContainBufferWithDeviceAddress = dedicatedBufferImageUsage == VmaBufferImageUsage::UNKNOWN ||
13654 dedicatedBufferImageUsage.Contains(flag: VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_EXT);
13655 }
13656 else if(dedicatedImage != VK_NULL_HANDLE)
13657 {
13658 canContainBufferWithDeviceAddress = false;
13659 }
13660 if(canContainBufferWithDeviceAddress)
13661 {
13662 allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
13663 VmaPnextChainPushFront(mainStruct: &allocInfo, newStruct: &allocFlagsInfo);
13664 }
13665 }
13666#endif // #if VMA_BUFFER_DEVICE_ADDRESS
13667
13668#if VMA_MEMORY_PRIORITY
13669 VkMemoryPriorityAllocateInfoEXT priorityInfo = { .sType: VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
13670 if(m_UseExtMemoryPriority)
13671 {
13672 VMA_ASSERT(priority >= 0.f && priority <= 1.f);
13673 priorityInfo.priority = priority;
13674 VmaPnextChainPushFront(mainStruct: &allocInfo, newStruct: &priorityInfo);
13675 }
13676#endif // #if VMA_MEMORY_PRIORITY
13677
13678#if VMA_EXTERNAL_MEMORY
13679 // Attach VkExportMemoryAllocateInfoKHR if necessary.
13680 VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { .sType: VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
13681 exportMemoryAllocInfo.handleTypes = GetExternalMemoryHandleTypeFlags(memTypeIndex);
13682 if(exportMemoryAllocInfo.handleTypes != 0)
13683 {
13684 VmaPnextChainPushFront(mainStruct: &allocInfo, newStruct: &exportMemoryAllocInfo);
13685 }
13686#endif // #if VMA_EXTERNAL_MEMORY
13687
13688 size_t allocIndex;
13689 VkResult res = VK_SUCCESS;
13690 for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
13691 {
13692 res = AllocateDedicatedMemoryPage(
13693 pool,
13694 size,
13695 suballocType,
13696 memTypeIndex,
13697 allocInfo,
13698 map,
13699 isUserDataString,
13700 isMappingAllowed,
13701 pUserData,
13702 pAllocation: pAllocations + allocIndex);
13703 if(res != VK_SUCCESS)
13704 {
13705 break;
13706 }
13707 }
13708
13709 if(res == VK_SUCCESS)
13710 {
13711 for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
13712 {
13713 dedicatedAllocations.Register(alloc: pAllocations[allocIndex]);
13714 }
13715 VMA_DEBUG_LOG_FORMAT(" Allocated DedicatedMemory Count=%zu, MemoryTypeIndex=#%" PRIu32, allocationCount, memTypeIndex);
13716 }
13717 else
13718 {
13719 // Free all already created allocations.
13720 while(allocIndex--)
13721 {
13722 VmaAllocation currAlloc = pAllocations[allocIndex];
13723 VkDeviceMemory hMemory = currAlloc->GetMemory();
13724
13725 /*
13726 There is no need to call this, because Vulkan spec allows to skip vkUnmapMemory
13727 before vkFreeMemory.
13728
13729 if(currAlloc->GetMappedData() != VMA_NULL)
13730 {
13731 (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
13732 }
13733 */
13734
13735 FreeVulkanMemory(memoryType: memTypeIndex, size: currAlloc->GetSize(), hMemory);
13736 m_Budget.RemoveAllocation(heapIndex: MemoryTypeIndexToHeapIndex(memTypeIndex), allocationSize: currAlloc->GetSize());
13737 m_AllocationObjectAllocator.Free(hAlloc: currAlloc);
13738 }
13739
13740 memset(s: pAllocations, c: 0, n: sizeof(VmaAllocation) * allocationCount);
13741 }
13742
13743 return res;
13744}
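
/*
Illustrative sketch (not part of the library): callers normally reach
AllocateDedicatedMemory() indirectly, by requesting a dedicated allocation
through the public API, e.g.:

    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 64ull * 1024 * 1024;
    bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;

    VkBuffer buf;
    VmaAllocation alloc;
    vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, VMA_NULL);

The `allocator` handle is assumed to have been created elsewhere.
*/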

VkResult VmaAllocator_T::AllocateDedicatedMemoryPage(
    VmaPool pool,
    VkDeviceSize size,
    VmaSuballocationType suballocType,
    uint32_t memTypeIndex,
    const VkMemoryAllocateInfo& allocInfo,
    bool map,
    bool isUserDataString,
    bool isMappingAllowed,
    void* pUserData,
    VmaAllocation* pAllocation)
{
    VkDeviceMemory hMemory = VK_NULL_HANDLE;
    VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    if(res < 0)
    {
        VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
        return res;
    }

    void* pMappedData = VMA_NULL;
    if(map)
    {
        res = (*m_VulkanFunctions.vkMapMemory)(
            m_hDevice,
            hMemory,
            0,
            VK_WHOLE_SIZE,
            0,
            &pMappedData);
        if(res < 0)
        {
            VMA_DEBUG_LOG(" vkMapMemory FAILED");
            FreeVulkanMemory(memTypeIndex, size, hMemory);
            return res;
        }
    }

    *pAllocation = m_AllocationObjectAllocator.Allocate(isMappingAllowed);
    (*pAllocation)->InitDedicatedAllocation(this, pool, memTypeIndex, hMemory, suballocType, pMappedData, size);
    if(isUserDataString)
        (*pAllocation)->SetName(this, (const char*)pUserData);
    else
        (*pAllocation)->SetUserData(this, pUserData);
    m_Budget.AddAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), size);
    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    {
        FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    }

    return VK_SUCCESS;
}

void VmaAllocator_T::GetBufferMemoryRequirements(
    VkBuffer hBuffer,
    VkMemoryRequirements& memReq,
    bool& requiresDedicatedAllocation,
    bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
        memReqInfo.buffer = hBuffer;

        VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };

        VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
        VmaPnextChainPushFront(&memReq2, &memDedicatedReq);

        (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);

        memReq = memReq2.memoryRequirements;
        requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
        prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    }
    else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    {
        (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
        requiresDedicatedAllocation = false;
        prefersDedicatedAllocation = false;
    }
}

void VmaAllocator_T::GetImageMemoryRequirements(
    VkImage hImage,
    VkMemoryRequirements& memReq,
    bool& requiresDedicatedAllocation,
    bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
        memReqInfo.image = hImage;

        VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };

        VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
        VmaPnextChainPushFront(&memReq2, &memDedicatedReq);

        (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);

        memReq = memReq2.memoryRequirements;
        requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
        prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    }
    else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    {
        (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
        requiresDedicatedAllocation = false;
        prefersDedicatedAllocation = false;
    }
}

VkResult VmaAllocator_T::FindMemoryTypeIndex(
    uint32_t memoryTypeBits,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VmaBufferImageUsage bufImgUsage,
    uint32_t* pMemoryTypeIndex) const
{
    memoryTypeBits &= GetGlobalMemoryTypeBits();

    if(pAllocationCreateInfo->memoryTypeBits != 0)
    {
        memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
    }

    VkMemoryPropertyFlags requiredFlags = 0, preferredFlags = 0, notPreferredFlags = 0;
    if(!FindMemoryPreferences(
        IsIntegratedGpu(),
        *pAllocationCreateInfo,
        bufImgUsage,
        requiredFlags, preferredFlags, notPreferredFlags))
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    *pMemoryTypeIndex = UINT32_MAX;
    uint32_t minCost = UINT32_MAX;
    for(uint32_t memTypeIndex = 0, memTypeBit = 1;
        memTypeIndex < GetMemoryTypeCount();
        ++memTypeIndex, memTypeBit <<= 1)
    {
        // This memory type is acceptable according to memoryTypeBits bitmask.
        if((memTypeBit & memoryTypeBits) != 0)
        {
            const VkMemoryPropertyFlags currFlags =
                m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
            // This memory type contains requiredFlags.
            if((requiredFlags & ~currFlags) == 0)
            {
                // Calculate cost as number of bits from preferredFlags not present in this
                // memory type, plus number of bits from notPreferredFlags that are present.
                uint32_t currCost = VMA_COUNT_BITS_SET(preferredFlags & ~currFlags) +
                    VMA_COUNT_BITS_SET(currFlags & notPreferredFlags);
                // Remember memory type with lowest cost.
                if(currCost < minCost)
                {
                    *pMemoryTypeIndex = memTypeIndex;
                    if(currCost == 0)
                    {
                        return VK_SUCCESS;
                    }
                    minCost = currCost;
                }
            }
        }
    }
    return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
}
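
/*
Illustrative sketch (not part of the library): the cost function above can be
exercised directly through the public vmaFindMemoryTypeIndex(). For example,
asking for a host-visible, preferably device-local type:

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

    uint32_t memTypeIndex;
    VkResult res = vmaFindMemoryTypeIndex(allocator, UINT32_MAX, &allocCreateInfo, &memTypeIndex);

A type that has all required flags, all preferred flags, and no not-preferred
flags scores cost 0 and is returned immediately; otherwise the lowest-cost
candidate wins.
*/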

VkResult VmaAllocator_T::CalcMemTypeParams(
    VmaAllocationCreateInfo& inoutCreateInfo,
    uint32_t memTypeIndex,
    VkDeviceSize size,
    size_t allocationCount)
{
    // If memory type is not HOST_VISIBLE, disable MAPPED.
    if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
        (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    {
        inoutCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    }

    if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
        (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0)
    {
        const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
        VmaBudget heapBudget = {};
        GetHeapBudgets(&heapBudget, heapIndex, 1);
        if(heapBudget.usage + size * allocationCount > heapBudget.budget)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
    }
    return VK_SUCCESS;
}

VkResult VmaAllocator_T::CalcAllocationParams(
    VmaAllocationCreateInfo& inoutCreateInfo,
    bool dedicatedRequired,
    bool dedicatedPreferred)
{
    VMA_ASSERT((inoutCreateInfo.flags &
        (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) !=
        (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) &&
        "Specifying both flags VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT and VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT is incorrect.");
    VMA_ASSERT((((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) == 0 ||
        (inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0)) &&
        "Specifying VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT also requires VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
    if(inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
    {
        if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0)
        {
            VMA_ASSERT((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0 &&
                "When using VMA_ALLOCATION_CREATE_MAPPED_BIT and usage = VMA_MEMORY_USAGE_AUTO*, you must also specify VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
        }
    }

    // If memory is lazily allocated, it should always be dedicated.
    if(dedicatedRequired ||
        inoutCreateInfo.usage == VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED)
    {
        inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    }

    if(inoutCreateInfo.pool != VK_NULL_HANDLE)
    {
        if(inoutCreateInfo.pool->m_BlockVector.HasExplicitBlockSize() &&
            (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
        {
            VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT while current custom pool doesn't support dedicated allocations.");
            return VK_ERROR_FEATURE_NOT_PRESENT;
        }
        inoutCreateInfo.priority = inoutCreateInfo.pool->m_BlockVector.GetPriority();
    }

    if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
        (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    if(VMA_DEBUG_ALWAYS_DEDICATED_MEMORY &&
        (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    {
        inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    }

    // Non-auto USAGE values imply HOST_ACCESS flags, and so does VMA_MEMORY_USAGE_UNKNOWN
    // because it is used with custom pools. Which specific flag is used doesn't matter:
    // the flags change behavior only when used with VMA_MEMORY_USAGE_AUTO*;
    // otherwise they just protect from an assert on mapping.
    if(inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO &&
        inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE &&
        inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
    {
        if((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) == 0)
        {
            inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
        }
    }

    return VK_SUCCESS;
}
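
/*
Illustrative sketch (not part of the library): the rules validated above mean
that with VMA_MEMORY_USAGE_AUTO* a mapped allocation must also declare its host
access pattern, e.g.:

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT |
        VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;
*/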

VkResult VmaAllocator_T::AllocateMemory(
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    VkBuffer dedicatedBuffer,
    VkImage dedicatedImage,
    VmaBufferImageUsage dedicatedBufferImageUsage,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);

    VMA_ASSERT(VmaIsPow2(vkMemReq.alignment));

    if(vkMemReq.size == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VmaAllocationCreateInfo createInfoFinal = createInfo;
    VkResult res = CalcAllocationParams(createInfoFinal, requiresDedicatedAllocation, prefersDedicatedAllocation);
    if(res != VK_SUCCESS)
        return res;

    if(createInfoFinal.pool != VK_NULL_HANDLE)
    {
        VmaBlockVector& blockVector = createInfoFinal.pool->m_BlockVector;
        return AllocateMemoryOfType(
            createInfoFinal.pool,
            vkMemReq.size,
            vkMemReq.alignment,
            prefersDedicatedAllocation,
            dedicatedBuffer,
            dedicatedImage,
            dedicatedBufferImageUsage,
            createInfoFinal,
            blockVector.GetMemoryTypeIndex(),
            suballocType,
            createInfoFinal.pool->m_DedicatedAllocations,
            blockVector,
            allocationCount,
            pAllocations);
    }
    else
    {
        // Bit mask of Vulkan memory types acceptable for this allocation.
        uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
        uint32_t memTypeIndex = UINT32_MAX;
        res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
        // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
        if(res != VK_SUCCESS)
            return res;
        do
        {
            VmaBlockVector* blockVector = m_pBlockVectors[memTypeIndex];
            VMA_ASSERT(blockVector && "Trying to use unsupported memory type!");
            res = AllocateMemoryOfType(
                VK_NULL_HANDLE,
                vkMemReq.size,
                vkMemReq.alignment,
                requiresDedicatedAllocation || prefersDedicatedAllocation,
                dedicatedBuffer,
                dedicatedImage,
                dedicatedBufferImageUsage,
                createInfoFinal,
                memTypeIndex,
                suballocType,
                m_DedicatedAllocations[memTypeIndex],
                *blockVector,
                allocationCount,
                pAllocations);
            // Allocation succeeded.
            if(res == VK_SUCCESS)
                return VK_SUCCESS;

            // Remove old memTypeIndex from list of possibilities.
            memoryTypeBits &= ~(1u << memTypeIndex);
            // Find alternative memTypeIndex.
            res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
        } while(res == VK_SUCCESS);

        // No other matching memory type index could be found.
        // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }
}
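
/*
Illustrative sketch (not part of the library): the type-fallback loop above is
what backs the public vmaAllocateMemory(). A caller using it directly might do:

    VkMemoryRequirements memReq;
    vkGetBufferMemoryRequirements(device, buf, &memReq); // `device` and `buf` assumed to exist.

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

    VmaAllocation alloc;
    VmaAllocationInfo allocInfo;
    VkResult res = vmaAllocateMemory(allocator, &memReq, &allocCreateInfo, &alloc, &allocInfo);

If allocation fails in the best memory type, the next acceptable type is tried
until the candidate mask is exhausted.
*/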

void VmaAllocator_T::FreeMemory(
    size_t allocationCount,
    const VmaAllocation* pAllocations)
{
    VMA_ASSERT(pAllocations);

    for(size_t allocIndex = allocationCount; allocIndex--; )
    {
        VmaAllocation allocation = pAllocations[allocIndex];

        if(allocation != VK_NULL_HANDLE)
        {
            if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
            {
                FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
            }

            switch(allocation->GetType())
            {
            case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
            {
                VmaBlockVector* pBlockVector = VMA_NULL;
                VmaPool hPool = allocation->GetParentPool();
                if(hPool != VK_NULL_HANDLE)
                {
                    pBlockVector = &hPool->m_BlockVector;
                }
                else
                {
                    const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
                    pBlockVector = m_pBlockVectors[memTypeIndex];
                    VMA_ASSERT(pBlockVector && "Trying to free memory of unsupported type!");
                }
                pBlockVector->Free(allocation);
            }
            break;
            case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
                FreeDedicatedMemory(allocation);
                break;
            default:
                VMA_ASSERT(0);
            }
        }
    }
}

void VmaAllocator_T::CalculateStatistics(VmaTotalStatistics* pStats)
{
    // Initialize.
    VmaClearDetailedStatistics(pStats->total);
    for(uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
        VmaClearDetailedStatistics(pStats->memoryType[i]);
    for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
        VmaClearDetailedStatistics(pStats->memoryHeap[i]);

    // Process default pools.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
        if(pBlockVector != VMA_NULL)
            pBlockVector->AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
    }

    // Process custom pools.
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
        {
            VmaBlockVector& blockVector = pool->m_BlockVector;
            const uint32_t memTypeIndex = blockVector.GetMemoryTypeIndex();
            blockVector.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
            pool->m_DedicatedAllocations.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
        }
    }

    // Process dedicated allocations.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        m_DedicatedAllocations[memTypeIndex].AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
    }

    // Sum from memory types to memory heaps.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        const uint32_t memHeapIndex = m_MemProps.memoryTypes[memTypeIndex].heapIndex;
        VmaAddDetailedStatistics(pStats->memoryHeap[memHeapIndex], pStats->memoryType[memTypeIndex]);
    }

    // Sum from memory heaps to total.
    for(uint32_t memHeapIndex = 0; memHeapIndex < GetMemoryHeapCount(); ++memHeapIndex)
        VmaAddDetailedStatistics(pStats->total, pStats->memoryHeap[memHeapIndex]);

    VMA_ASSERT(pStats->total.statistics.allocationCount == 0 ||
        pStats->total.allocationSizeMax >= pStats->total.allocationSizeMin);
    VMA_ASSERT(pStats->total.unusedRangeCount == 0 ||
        pStats->total.unusedRangeSizeMax >= pStats->total.unusedRangeSizeMin);
}
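
/*
Illustrative sketch (not part of the library): this aggregation is exposed as
the public vmaCalculateStatistics(), e.g.:

    VmaTotalStatistics stats;
    vmaCalculateStatistics(allocator, &stats);
    printf("Allocated bytes: %llu\n",
        (unsigned long long)stats.total.statistics.allocationBytes);

Note this call traverses all blocks and is relatively expensive; for a cheap
per-frame check, prefer vmaGetHeapBudgets().
*/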

void VmaAllocator_T::GetHeapBudgets(VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount)
{
#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        if(m_Budget.m_OperationsSinceBudgetFetch < 30)
        {
            VmaMutexLockRead lockRead(m_Budget.m_BudgetMutex, m_UseMutex);
            for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
            {
                const uint32_t heapIndex = firstHeap + i;

                outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
                outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
                outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
                outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];

                if(m_Budget.m_VulkanUsage[heapIndex] + outBudgets->statistics.blockBytes > m_Budget.m_BlockBytesAtBudgetFetch[heapIndex])
                {
                    outBudgets->usage = m_Budget.m_VulkanUsage[heapIndex] +
                        outBudgets->statistics.blockBytes - m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
                }
                else
                {
                    outBudgets->usage = 0;
                }

                // Have to take MIN with heap size because explicit HeapSizeLimit is included in it.
                outBudgets->budget = VMA_MIN(
                    m_Budget.m_VulkanBudget[heapIndex], m_MemProps.memoryHeaps[heapIndex].size);
            }
        }
        else
        {
            UpdateVulkanBudget(); // Outside of mutex lock
            GetHeapBudgets(outBudgets, firstHeap, heapCount); // Recursion
        }
    }
    else
#endif
    {
        for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
        {
            const uint32_t heapIndex = firstHeap + i;

            outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
            outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
            outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
            outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];

            outBudgets->usage = outBudgets->statistics.blockBytes;
            outBudgets->budget = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristic.
        }
    }
}
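
/*
Worked example of the estimation above (numbers are made up): if the driver
reported 100 MB of usage at the last budget fetch, blockBytes was 30 MB then
and is 40 MB now, the estimated current usage is 100 + (40 - 30) = 110 MB.
A typical caller queries this through the public API:

    VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
    vmaGetHeapBudgets(allocator, budgets);
    // Compare budgets[heapIndex].usage against budgets[heapIndex].budget.
*/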

void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
{
    pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    pAllocationInfo->offset = hAllocation->GetOffset();
    pAllocationInfo->size = hAllocation->GetSize();
    pAllocationInfo->pMappedData = hAllocation->GetMappedData();
    pAllocationInfo->pUserData = hAllocation->GetUserData();
    pAllocationInfo->pName = hAllocation->GetName();
}

void VmaAllocator_T::GetAllocationInfo2(VmaAllocation hAllocation, VmaAllocationInfo2* pAllocationInfo)
{
    GetAllocationInfo(hAllocation, &pAllocationInfo->allocationInfo);

    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
        pAllocationInfo->blockSize = hAllocation->GetBlock()->m_pMetadata->GetSize();
        pAllocationInfo->dedicatedMemory = VK_FALSE;
        break;
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        pAllocationInfo->blockSize = pAllocationInfo->allocationInfo.size;
        pAllocationInfo->dedicatedMemory = VK_TRUE;
        break;
    default:
        VMA_ASSERT(0);
    }
}

VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
{
    VMA_DEBUG_LOG_FORMAT(" CreatePool: MemoryTypeIndex=%" PRIu32 ", flags=%" PRIu32, pCreateInfo->memoryTypeIndex, pCreateInfo->flags);

    VmaPoolCreateInfo newCreateInfo = *pCreateInfo;

    // Protection against an uninitialized new structure member. If garbage data is left there, this pointer dereference would crash.
    if(pCreateInfo->pMemoryAllocateNext)
    {
        VMA_ASSERT(((const VkBaseInStructure*)pCreateInfo->pMemoryAllocateNext)->sType != 0);
    }

    if(newCreateInfo.maxBlockCount == 0)
    {
        newCreateInfo.maxBlockCount = SIZE_MAX;
    }
    if(newCreateInfo.minBlockCount > newCreateInfo.maxBlockCount)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Memory type index out of range or forbidden.
    if(pCreateInfo->memoryTypeIndex >= GetMemoryTypeCount() ||
        ((1u << pCreateInfo->memoryTypeIndex) & m_GlobalMemoryTypeBits) == 0)
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }
    if(newCreateInfo.minAllocationAlignment > 0)
    {
        VMA_ASSERT(VmaIsPow2(newCreateInfo.minAllocationAlignment));
    }

    const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);

    *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo, preferredBlockSize);

    VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
    if(res != VK_SUCCESS)
    {
        vma_delete(this, *pPool);
        *pPool = VMA_NULL;
        return res;
    }

    // Add to m_Pools.
    {
        VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
        (*pPool)->SetId(m_NextPoolId++);
        m_Pools.PushBack(*pPool);
    }

    return VK_SUCCESS;
}
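
/*
Illustrative sketch (not part of the library): creating a custom pool through
the public API, assuming `memTypeIndex` was found with vmaFindMemoryTypeIndex():

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.blockSize = 128ull * 1024 * 1024; // Optional: explicit block size.

    VmaPool pool;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
    // ... allocate from it via VmaAllocationCreateInfo::pool ...
    vmaDestroyPool(allocator, pool);
*/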

void VmaAllocator_T::DestroyPool(VmaPool pool)
{
    // Remove from m_Pools.
    {
        VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
        m_Pools.Remove(pool);
    }

    vma_delete(this, pool);
}

void VmaAllocator_T::GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats)
{
    VmaClearStatistics(*pPoolStats);
    pool->m_BlockVector.AddStatistics(*pPoolStats);
    pool->m_DedicatedAllocations.AddStatistics(*pPoolStats);
}

void VmaAllocator_T::CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats)
{
    VmaClearDetailedStatistics(*pPoolStats);
    pool->m_BlockVector.AddDetailedStatistics(*pPoolStats);
    pool->m_DedicatedAllocations.AddDetailedStatistics(*pPoolStats);
}

void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
{
    m_CurrentFrameIndex.store(frameIndex);

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        UpdateVulkanBudget();
    }
#endif // #if VMA_MEMORY_BUDGET
}

VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
{
    return hPool->m_BlockVector.CheckCorruption();
}

VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
{
    VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;

    // Process default pools.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
        if(pBlockVector != VMA_NULL)
        {
            VkResult localRes = pBlockVector->CheckCorruption();
            switch(localRes)
            {
            case VK_ERROR_FEATURE_NOT_PRESENT:
                break;
            case VK_SUCCESS:
                finalRes = VK_SUCCESS;
                break;
            default:
                return localRes;
            }
        }
    }

    // Process custom pools.
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
        {
            if(((1u << pool->m_BlockVector.GetMemoryTypeIndex()) & memoryTypeBits) != 0)
            {
                VkResult localRes = pool->m_BlockVector.CheckCorruption();
                switch(localRes)
                {
                case VK_ERROR_FEATURE_NOT_PRESENT:
                    break;
                case VK_SUCCESS:
                    finalRes = VK_SUCCESS;
                    break;
                default:
                    return localRes;
                }
            }
        }
    }

    return finalRes;
}

VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
{
    AtomicTransactionalIncrement<VMA_ATOMIC_UINT32> deviceMemoryCountIncrement;
    const uint64_t prevDeviceMemoryCount = deviceMemoryCountIncrement.Increment(&m_DeviceMemoryCount);
#if VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
    if(prevDeviceMemoryCount >= m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount)
    {
        return VK_ERROR_TOO_MANY_OBJECTS;
    }
#endif

    const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);

    // HeapSizeLimit is in effect for this heap.
    if((m_HeapSizeLimitMask & (1u << heapIndex)) != 0)
    {
        const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
        VkDeviceSize blockBytes = m_Budget.m_BlockBytes[heapIndex];
        for(;;)
        {
            const VkDeviceSize blockBytesAfterAllocation = blockBytes + pAllocateInfo->allocationSize;
            if(blockBytesAfterAllocation > heapSize)
            {
                return VK_ERROR_OUT_OF_DEVICE_MEMORY;
            }
            if(m_Budget.m_BlockBytes[heapIndex].compare_exchange_strong(blockBytes, blockBytesAfterAllocation))
            {
                break;
            }
        }
    }
    else
    {
        m_Budget.m_BlockBytes[heapIndex] += pAllocateInfo->allocationSize;
    }
    ++m_Budget.m_BlockCount[heapIndex];

    // VULKAN CALL vkAllocateMemory.
    VkResult res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);

    if(res == VK_SUCCESS)
    {
#if VMA_MEMORY_BUDGET
        ++m_Budget.m_OperationsSinceBudgetFetch;
#endif

        // Informative callback.
        if(m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
        {
            (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize, m_DeviceMemoryCallbacks.pUserData);
        }

        deviceMemoryCountIncrement.Commit();
    }
    else
    {
        --m_Budget.m_BlockCount[heapIndex];
        m_Budget.m_BlockBytes[heapIndex] -= pAllocateInfo->allocationSize;
    }

    return res;
}

void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
{
    // Informative callback.
    if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
    {
        (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size, m_DeviceMemoryCallbacks.pUserData);
    }

    // VULKAN CALL vkFreeMemory.
    (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());

    const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
    --m_Budget.m_BlockCount[heapIndex];
    m_Budget.m_BlockBytes[heapIndex] -= size;

    --m_DeviceMemoryCount;
}

VkResult VmaAllocator_T::BindVulkanBuffer(
    VkDeviceMemory memory,
    VkDeviceSize memoryOffset,
    VkBuffer buffer,
    const void* pNext)
{
    if(pNext != VMA_NULL)
    {
#if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
            m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL)
        {
            VkBindBufferMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_BUFFER_MEMORY_INFO_KHR };
            bindBufferMemoryInfo.pNext = pNext;
            bindBufferMemoryInfo.buffer = buffer;
            bindBufferMemoryInfo.memory = memory;
            bindBufferMemoryInfo.memoryOffset = memoryOffset;
            return (*m_VulkanFunctions.vkBindBufferMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
        }
        else
#endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        {
            return VK_ERROR_EXTENSION_NOT_PRESENT;
        }
    }
    else
    {
        return (*m_VulkanFunctions.vkBindBufferMemory)(m_hDevice, buffer, memory, memoryOffset);
    }
}

VkResult VmaAllocator_T::BindVulkanImage(
    VkDeviceMemory memory,
    VkDeviceSize memoryOffset,
    VkImage image,
    const void* pNext)
{
    if(pNext != VMA_NULL)
    {
#if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
            m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL)
        {
            VkBindImageMemoryInfoKHR bindImageMemoryInfo = { VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR };
            bindImageMemoryInfo.pNext = pNext;
            bindImageMemoryInfo.image = image;
            bindImageMemoryInfo.memory = memory;
            bindImageMemoryInfo.memoryOffset = memoryOffset;
            return (*m_VulkanFunctions.vkBindImageMemory2KHR)(m_hDevice, 1, &bindImageMemoryInfo);
        }
        else
#endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        {
            return VK_ERROR_EXTENSION_NOT_PRESENT;
        }
    }
    else
    {
        return (*m_VulkanFunctions.vkBindImageMemory)(m_hDevice, image, memory, memoryOffset);
    }
}

VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
{
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    {
        VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
        char* pBytes = VMA_NULL;
        VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
        if(res == VK_SUCCESS)
        {
            *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
            hAllocation->BlockAllocMap();
        }
        return res;
    }
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        return hAllocation->DedicatedAllocMap(this, ppData);
    default:
        VMA_ASSERT(0);
        return VK_ERROR_MEMORY_MAP_FAILED;
    }
}
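
/*
Illustrative sketch (not part of the library): Map()/Unmap() back the public
mapping functions; reference counting makes nested mapping legal:

    void* data;
    vmaMapMemory(allocator, alloc, &data);
    memcpy(data, srcData, srcSize); // `srcData`/`srcSize` assumed to exist.
    vmaUnmapMemory(allocator, alloc);

For memory that stays mapped for the allocation's whole lifetime, prefer
creating the allocation with VMA_ALLOCATION_CREATE_MAPPED_BIT instead.
*/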

void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
{
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    {
        VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
        hAllocation->BlockAllocUnmap();
        pBlock->Unmap(this, 1);
    }
    break;
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        hAllocation->DedicatedAllocUnmap(this);
        break;
    default:
        VMA_ASSERT(0);
    }
}

VkResult VmaAllocator_T::BindBufferMemory(
    VmaAllocation hAllocation,
    VkDeviceSize allocationLocalOffset,
    VkBuffer hBuffer,
    const void* pNext)
{
    VkResult res = VK_ERROR_UNKNOWN_COPY;
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        res = BindVulkanBuffer(hAllocation->GetMemory(), allocationLocalOffset, hBuffer, pNext);
        break;
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    {
        VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
        VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block.");
        res = pBlock->BindBufferMemory(this, hAllocation, allocationLocalOffset, hBuffer, pNext);
        break;
    }
    default:
        VMA_ASSERT(0);
    }
    return res;
}

VkResult VmaAllocator_T::BindImageMemory(
    VmaAllocation hAllocation,
    VkDeviceSize allocationLocalOffset,
    VkImage hImage,
    const void* pNext)
{
    VkResult res = VK_ERROR_UNKNOWN_COPY;
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        res = BindVulkanImage(hAllocation->GetMemory(), allocationLocalOffset, hImage, pNext);
        break;
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    {
        VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
        VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block.");
        res = pBlock->BindImageMemory(this, hAllocation, allocationLocalOffset, hImage, pNext);
        break;
    }
    default:
        VMA_ASSERT(0);
    }
    return res;
}

VkResult VmaAllocator_T::FlushOrInvalidateAllocation(
    VmaAllocation hAllocation,
    VkDeviceSize offset, VkDeviceSize size,
    VMA_CACHE_OPERATION op)
{
    VkResult res = VK_SUCCESS;

    VkMappedMemoryRange memRange = {};
    if(GetFlushOrInvalidateRange(hAllocation, offset, size, memRange))
    {
        switch(op)
        {
        case VMA_CACHE_FLUSH:
            res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
            break;
        case VMA_CACHE_INVALIDATE:
            res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
            break;
        default:
            VMA_ASSERT(0);
        }
    }
    // else: Just ignore this call.
    return res;
}
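
/*
Illustrative sketch (not part of the library): for a non-HOST_COHERENT memory
type, a host write must be followed by a flush, e.g. via the public wrapper:

    memcpy((char*)allocInfo.pMappedData + offset, srcData, dataSize);
    vmaFlushAllocation(allocator, alloc, offset, dataSize);

On HOST_COHERENT memory GetFlushOrInvalidateRange() returns false, so the call
above degenerates to a no-op returning VK_SUCCESS.
*/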

VkResult VmaAllocator_T::FlushOrInvalidateAllocations(
    uint32_t allocationCount,
    const VmaAllocation* allocations,
    const VkDeviceSize* offsets, const VkDeviceSize* sizes,
    VMA_CACHE_OPERATION op)
{
    typedef VmaStlAllocator<VkMappedMemoryRange> RangeAllocator;
    typedef VmaSmallVector<VkMappedMemoryRange, RangeAllocator, 16> RangeVector;
    RangeVector ranges = RangeVector(RangeAllocator(GetAllocationCallbacks()));

    for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    {
        const VmaAllocation alloc = allocations[allocIndex];
        const VkDeviceSize offset = offsets != VMA_NULL ? offsets[allocIndex] : 0;
        const VkDeviceSize size = sizes != VMA_NULL ? sizes[allocIndex] : VK_WHOLE_SIZE;
        VkMappedMemoryRange newRange;
        if(GetFlushOrInvalidateRange(alloc, offset, size, newRange))
        {
            ranges.push_back(newRange);
        }
    }

    VkResult res = VK_SUCCESS;
    if(!ranges.empty())
    {
        switch(op)
        {
        case VMA_CACHE_FLUSH:
            res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
            break;
        case VMA_CACHE_INVALIDATE:
            res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
            break;
        default:
            VMA_ASSERT(0);
        }
    }
    // else: Just ignore this call.
    return res;
}

VkResult VmaAllocator_T::CopyMemoryToAllocation(
    const void* pSrcHostPointer,
    VmaAllocation dstAllocation,
    VkDeviceSize dstAllocationLocalOffset,
    VkDeviceSize size)
{
    void* dstMappedData = VMA_NULL;
    VkResult res = Map(dstAllocation, &dstMappedData);
    if(res == VK_SUCCESS)
    {
        memcpy((char*)dstMappedData + dstAllocationLocalOffset, pSrcHostPointer, (size_t)size);
        Unmap(dstAllocation);
        res = FlushOrInvalidateAllocation(dstAllocation, dstAllocationLocalOffset, size, VMA_CACHE_FLUSH);
    }
    return res;
}
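
/*
Illustrative sketch (not part of the library): the public wrapper
vmaCopyMemoryToAllocation() performs the map + memcpy + flush sequence above
in a single call:

    VkResult res = vmaCopyMemoryToAllocation(allocator, srcData, alloc, 0, dataSize);

This is convenient for uploads to host-visible memory without a staging buffer.
*/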

VkResult VmaAllocator_T::CopyAllocationToMemory(
    VmaAllocation srcAllocation,
    VkDeviceSize srcAllocationLocalOffset,
    void* pDstHostPointer,
    VkDeviceSize size)
{
    void* srcMappedData = VMA_NULL;
    VkResult res = Map(srcAllocation, &srcMappedData);
    if(res == VK_SUCCESS)
    {
        res = FlushOrInvalidateAllocation(srcAllocation, srcAllocationLocalOffset, size, VMA_CACHE_INVALIDATE);
        if(res == VK_SUCCESS)
        {
            memcpy(pDstHostPointer, (const char*)srcMappedData + srcAllocationLocalOffset, (size_t)size);
            Unmap(srcAllocation);
        }
    }
    return res;
}

void VmaAllocator_T::FreeDedicatedMemory(const VmaAllocation allocation)
{
    VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);

    const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    VmaPool parentPool = allocation->GetParentPool();
    if(parentPool == VK_NULL_HANDLE)
    {
        // Default pool
        m_DedicatedAllocations[memTypeIndex].Unregister(allocation);
    }
    else
    {
        // Custom pool
        parentPool->m_DedicatedAllocations.Unregister(allocation);
    }

    VkDeviceMemory hMemory = allocation->GetMemory();

    /*
    There is no need to call this, because Vulkan spec allows to skip vkUnmapMemory
    before vkFreeMemory.

    if(allocation->GetMappedData() != VMA_NULL)
    {
        (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
    }
    */

    FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);

    m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(allocation->GetMemoryTypeIndex()), allocation->GetSize());
    allocation->Destroy(this);
    m_AllocationObjectAllocator.Free(allocation);

    VMA_DEBUG_LOG_FORMAT(" Freed DedicatedMemory MemoryTypeIndex=%" PRIu32, memTypeIndex);
}

uint32_t VmaAllocator_T::CalculateGpuDefragmentationMemoryTypeBits() const
{
    VkBufferCreateInfo dummyBufCreateInfo;
    VmaFillGpuDefragmentationBufferCreateInfo(dummyBufCreateInfo);

    uint32_t memoryTypeBits = 0;

    // Create buffer.
    VkBuffer buf = VK_NULL_HANDLE;
    VkResult res = (*GetVulkanFunctions().vkCreateBuffer)(
        m_hDevice, &dummyBufCreateInfo, GetAllocationCallbacks(), &buf);
    if(res == VK_SUCCESS)
    {
        // Query for supported memory types.
        VkMemoryRequirements memReq;
        (*GetVulkanFunctions().vkGetBufferMemoryRequirements)(m_hDevice, buf, &memReq);
        memoryTypeBits = memReq.memoryTypeBits;

        // Destroy buffer.
        (*GetVulkanFunctions().vkDestroyBuffer)(m_hDevice, buf, GetAllocationCallbacks());
    }

    return memoryTypeBits;
}

uint32_t VmaAllocator_T::CalculateGlobalMemoryTypeBits() const
{
    // Make sure memory information is already fetched.
    VMA_ASSERT(GetMemoryTypeCount() > 0);

    uint32_t memoryTypeBits = UINT32_MAX;

    if(!m_UseAmdDeviceCoherentMemory)
    {
        // Exclude memory types that have VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD.
        for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
        {
            if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
            {
                memoryTypeBits &= ~(1u << memTypeIndex);
            }
        }
    }

    return memoryTypeBits;
}

bool VmaAllocator_T::GetFlushOrInvalidateRange(
    VmaAllocation allocation,
    VkDeviceSize offset, VkDeviceSize size,
    VkMappedMemoryRange& outRange) const
{
    const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
    {
        const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
        const VkDeviceSize allocationSize = allocation->GetSize();
        VMA_ASSERT(offset <= allocationSize);

        outRange.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
        outRange.pNext = VMA_NULL;
        outRange.memory = allocation->GetMemory();

        switch(allocation->GetType())
        {
        case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
            outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
            if(size == VK_WHOLE_SIZE)
            {
                outRange.size = allocationSize - outRange.offset;
            }
            else
            {
                VMA_ASSERT(offset + size <= allocationSize);
                outRange.size = VMA_MIN(
                    VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize),
                    allocationSize - outRange.offset);
            }
            break;
        case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
        {
            // 1. Still within this allocation.
            outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
            if(size == VK_WHOLE_SIZE)
            {
                size = allocationSize - offset;
            }
            else
            {
                VMA_ASSERT(offset + size <= allocationSize);
            }
            outRange.size = VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize);

            // 2. Adjust to whole block.
            const VkDeviceSize allocationOffset = allocation->GetOffset();
            VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
            const VkDeviceSize blockSize = allocation->GetBlock()->m_pMetadata->GetSize();
            outRange.offset += allocationOffset;
            outRange.size = VMA_MIN(outRange.size, blockSize - outRange.offset);

            break;
        }
        default:
            VMA_ASSERT(0);
        }
        return true;
    }
    return false;
}
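
/*
Worked example of the rounding above: with nonCoherentAtomSize = 64, a request
to flush offset = 100, size = 200 inside a sufficiently large dedicated
allocation yields
    outRange.offset = VmaAlignDown(100, 64) = 64,
    outRange.size = VmaAlignUp(200 + (100 - 64), 64) = VmaAlignUp(236, 64) = 256,
clamped so the range never extends past the end of the allocation (or block).
*/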

#if VMA_MEMORY_BUDGET
void VmaAllocator_T::UpdateVulkanBudget()
{
    VMA_ASSERT(m_UseExtMemoryBudget);

    VkPhysicalDeviceMemoryProperties2KHR memProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR };

    VkPhysicalDeviceMemoryBudgetPropertiesEXT budgetProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT };
    VmaPnextChainPushFront(&memProps, &budgetProps);

    GetVulkanFunctions().vkGetPhysicalDeviceMemoryProperties2KHR(m_PhysicalDevice, &memProps);

    {
        VmaMutexLockWrite lockWrite(m_Budget.m_BudgetMutex, m_UseMutex);

        for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
        {
            m_Budget.m_VulkanUsage[heapIndex] = budgetProps.heapUsage[heapIndex];
            m_Budget.m_VulkanBudget[heapIndex] = budgetProps.heapBudget[heapIndex];
            m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] = m_Budget.m_BlockBytes[heapIndex].load();

            // Some buggy drivers return the budget incorrectly, e.g. 0 or much bigger than heap size.
            if(m_Budget.m_VulkanBudget[heapIndex] == 0)
            {
                m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristic.
            }
            else if(m_Budget.m_VulkanBudget[heapIndex] > m_MemProps.memoryHeaps[heapIndex].size)
            {
                m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size;
            }
            if(m_Budget.m_VulkanUsage[heapIndex] == 0 && m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] > 0)
            {
                m_Budget.m_VulkanUsage[heapIndex] = m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
            }
        }
        m_Budget.m_OperationsSinceBudgetFetch = 0;
    }
}
#endif // VMA_MEMORY_BUDGET

void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
{
    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
        hAllocation->IsMappingAllowed() &&
        (m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    {
        void* pData = VMA_NULL;
        VkResult res = Map(hAllocation, &pData);
        if(res == VK_SUCCESS)
        {
            memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
            FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
            Unmap(hAllocation);
        }
        else
        {
            VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
        }
    }
}

uint32_t VmaAllocator_T::GetGpuDefragmentationMemoryTypeBits()
{
    uint32_t memoryTypeBits = m_GpuDefragmentationMemoryTypeBits.load();
    if(memoryTypeBits == UINT32_MAX)
    {
        memoryTypeBits = CalculateGpuDefragmentationMemoryTypeBits();
        m_GpuDefragmentationMemoryTypeBits.store(memoryTypeBits);
    }
    return memoryTypeBits;
}

#if VMA_STATS_STRING_ENABLED
void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
{
    json.WriteString("DefaultPools");
    json.BeginObject();
    {
        for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
        {
            VmaBlockVector* pBlockVector = m_pBlockVectors[memTypeIndex];
            VmaDedicatedAllocationList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
            if(pBlockVector != VMA_NULL)
            {
                json.BeginString("Type ");
                json.ContinueString(memTypeIndex);
                json.EndString();
                json.BeginObject();
                {
                    json.WriteString("PreferredBlockSize");
                    json.WriteNumber(pBlockVector->GetPreferredBlockSize());

                    json.WriteString("Blocks");
                    pBlockVector->PrintDetailedMap(json);

                    json.WriteString("DedicatedAllocations");
                    dedicatedAllocList.BuildStatsString(json);
                }
                json.EndObject();
            }
        }
    }
    json.EndObject();

    json.WriteString("CustomPools");
    json.BeginObject();
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        if(!m_Pools.IsEmpty())
        {
            for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
            {
                bool displayType = true;
                size_t index = 0;
                for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
                {
                    VmaBlockVector& blockVector = pool->m_BlockVector;
                    if(blockVector.GetMemoryTypeIndex() == memTypeIndex)
                    {
                        if(displayType)
                        {
                            json.BeginString("Type ");
                            json.ContinueString(memTypeIndex);
                            json.EndString();
                            json.BeginArray();
                            displayType = false;
                        }

                        json.BeginObject();
                        {
                            json.WriteString("Name");
                            json.BeginString();
                            json.ContinueString((uint64_t)index++);
                            if(pool->GetName())
                            {
                                json.ContinueString(" - ");
                                json.ContinueString(pool->GetName());
                            }
                            json.EndString();

                            json.WriteString("PreferredBlockSize");
                            json.WriteNumber(blockVector.GetPreferredBlockSize());

                            json.WriteString("Blocks");
                            blockVector.PrintDetailedMap(json);

                            json.WriteString("DedicatedAllocations");
                            pool->m_DedicatedAllocations.BuildStatsString(json);
                        }
                        json.EndObject();
                    }
                }

                if(!displayType)
                    json.EndArray();
            }
        }
    }
    json.EndObject();
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_ALLOCATOR_T_FUNCTIONS


#ifndef _VMA_PUBLIC_INTERFACE
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
    const VmaAllocatorCreateInfo* pCreateInfo,
    VmaAllocator* pAllocator)
{
    VMA_ASSERT(pCreateInfo && pAllocator);
    VMA_ASSERT(pCreateInfo->vulkanApiVersion == 0 ||
        (VK_VERSION_MAJOR(pCreateInfo->vulkanApiVersion) == 1 && VK_VERSION_MINOR(pCreateInfo->vulkanApiVersion) <= 4));
    VMA_DEBUG_LOG("vmaCreateAllocator");
    *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
    VkResult result = (*pAllocator)->Init(pCreateInfo);
    if(result < 0)
    {
        vma_delete(pCreateInfo->pAllocationCallbacks, *pAllocator);
        *pAllocator = VK_NULL_HANDLE;
    }
    return result;
}
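
/*
Illustrative sketch (not part of the library's implementation): minimal
allocator creation, assuming `instance`, `physicalDevice`, and `device` were
created with Vulkan 1.2:

    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
    allocatorCreateInfo.instance = instance;
    allocatorCreateInfo.physicalDevice = physicalDevice;
    allocatorCreateInfo.device = device;

    VmaAllocator allocator;
    VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
    // ...
    vmaDestroyAllocator(allocator);
*/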

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
    VmaAllocator allocator)
{
    if(allocator != VK_NULL_HANDLE)
    {
        VMA_DEBUG_LOG("vmaDestroyAllocator");
        VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
        vma_delete(&allocationCallbacks, allocator);
    }
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(VmaAllocator allocator, VmaAllocatorInfo* pAllocatorInfo)
{
    VMA_ASSERT(allocator && pAllocatorInfo);
    pAllocatorInfo->instance = allocator->m_hInstance;
    pAllocatorInfo->physicalDevice = allocator->GetPhysicalDevice();
    pAllocatorInfo->device = allocator->m_hDevice;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
    VmaAllocator allocator,
    const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties)
{
    VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
    *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
    VmaAllocator allocator,
    const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
{
    VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
    *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
    VmaAllocator allocator,
    uint32_t memoryTypeIndex,
    VkMemoryPropertyFlags* pFlags)
{
    VMA_ASSERT(allocator && pFlags);
    VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
    *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
    VmaAllocator allocator,
    uint32_t frameIndex)
{
    VMA_ASSERT(allocator);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->SetCurrentFrameIndex(frameIndex);
}

VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
    VmaAllocator allocator,
    VmaTotalStatistics* pStats)
{
    VMA_ASSERT(allocator && pStats);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK
    allocator->CalculateStatistics(pStats);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
    VmaAllocator allocator,
    VmaBudget* pBudgets)
{
    VMA_ASSERT(allocator && pBudgets);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK
    allocator->GetHeapBudgets(pBudgets, 0, allocator->GetMemoryHeapCount());
}

#if VMA_STATS_STRING_ENABLED

VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
    VmaAllocator allocator,
    char** ppStatsString,
    VkBool32 detailedMap)
{
    VMA_ASSERT(allocator && ppStatsString);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VmaStringBuilder sb(allocator->GetAllocationCallbacks());
    {
        VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
        allocator->GetHeapBudgets(budgets, 0, allocator->GetMemoryHeapCount());

        VmaTotalStatistics stats;
        allocator->CalculateStatistics(&stats);

        VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
        json.BeginObject();
        {
            json.WriteString("General");
            json.BeginObject();
            {
                const VkPhysicalDeviceProperties& deviceProperties = allocator->m_PhysicalDeviceProperties;
                const VkPhysicalDeviceMemoryProperties& memoryProperties = allocator->m_MemProps;

                json.WriteString("API");
                json.WriteString("Vulkan");

                json.WriteString("apiVersion");
                json.BeginString();
                json.ContinueString(VK_VERSION_MAJOR(deviceProperties.apiVersion));
                json.ContinueString(".");
                json.ContinueString(VK_VERSION_MINOR(deviceProperties.apiVersion));
                json.ContinueString(".");
                json.ContinueString(VK_VERSION_PATCH(deviceProperties.apiVersion));
                json.EndString();

                json.WriteString("GPU");
                json.WriteString(deviceProperties.deviceName);
                json.WriteString("deviceType");
                json.WriteNumber(static_cast<uint32_t>(deviceProperties.deviceType));

                json.WriteString("maxMemoryAllocationCount");
                json.WriteNumber(deviceProperties.limits.maxMemoryAllocationCount);
                json.WriteString("bufferImageGranularity");
                json.WriteNumber(deviceProperties.limits.bufferImageGranularity);
                json.WriteString("nonCoherentAtomSize");
                json.WriteNumber(deviceProperties.limits.nonCoherentAtomSize);

                json.WriteString("memoryHeapCount");
                json.WriteNumber(memoryProperties.memoryHeapCount);
                json.WriteString("memoryTypeCount");
                json.WriteNumber(memoryProperties.memoryTypeCount);
            }
            json.EndObject();
        }
        {
            json.WriteString("Total");
            VmaPrintDetailedStatistics(json, stats.total);
        }
        {
            json.WriteString("MemoryInfo");
            json.BeginObject();
            {
                for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
                {
                    json.BeginString("Heap ");
                    json.ContinueString(heapIndex);
                    json.EndString();
                    json.BeginObject();
                    {
                        const VkMemoryHeap& heapInfo = allocator->m_MemProps.memoryHeaps[heapIndex];
                        json.WriteString("Flags");
                        json.BeginArray(true); // singleLine
                        {
                            if(heapInfo.flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                                json.WriteString("DEVICE_LOCAL");
#if VMA_VULKAN_VERSION >= 1001000
                            if(heapInfo.flags & VK_MEMORY_HEAP_MULTI_INSTANCE_BIT)
                                json.WriteString("MULTI_INSTANCE");
#endif

                            VkMemoryHeapFlags flags = heapInfo.flags &
                                ~(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT
#if VMA_VULKAN_VERSION >= 1001000
                                | VK_MEMORY_HEAP_MULTI_INSTANCE_BIT
#endif
                                );
                            if(flags != 0)
                                json.WriteNumber(flags);
                        }
                        json.EndArray();

                        json.WriteString("Size");
                        json.WriteNumber(heapInfo.size);

                        json.WriteString("Budget");
                        json.BeginObject();
                        {
                            json.WriteString("BudgetBytes");
                            json.WriteNumber(budgets[heapIndex].budget);
15280 json.WriteString(pStr: "UsageBytes");
15281 json.WriteNumber(n: budgets[heapIndex].usage);
15282 }
15283 json.EndObject();
15284
15285 json.WriteString(pStr: "Stats");
15286 VmaPrintDetailedStatistics(json, stat: stats.memoryHeap[heapIndex]);
15287
15288 json.WriteString(pStr: "MemoryPools");
15289 json.BeginObject();
15290 {
15291 for (uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
15292 {
15293 if (allocator->MemoryTypeIndexToHeapIndex(memTypeIndex: typeIndex) == heapIndex)
15294 {
15295 json.BeginString(pStr: "Type ");
15296 json.ContinueString(n: typeIndex);
15297 json.EndString();
15298 json.BeginObject();
15299 {
15300 json.WriteString(pStr: "Flags");
15301 json.BeginArray(singleLine: true);
15302 {
15303 VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
15304 if (flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)
15305 json.WriteString(pStr: "DEVICE_LOCAL");
15306 if (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
15307 json.WriteString(pStr: "HOST_VISIBLE");
15308 if (flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)
15309 json.WriteString(pStr: "HOST_COHERENT");
15310 if (flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT)
15311 json.WriteString(pStr: "HOST_CACHED");
15312 if (flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT)
15313 json.WriteString(pStr: "LAZILY_ALLOCATED");
15314 #if VMA_VULKAN_VERSION >= 1001000
15315 if (flags & VK_MEMORY_PROPERTY_PROTECTED_BIT)
15316 json.WriteString(pStr: "PROTECTED");
15317 #endif
15318 #if VK_AMD_device_coherent_memory
15319 if (flags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY)
15320 json.WriteString(pStr: "DEVICE_COHERENT_AMD");
15321 if (flags & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)
15322 json.WriteString(pStr: "DEVICE_UNCACHED_AMD");
15323 #endif
15324
                                            flags &= ~(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
                                                | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
                                                | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
                                                | VK_MEMORY_PROPERTY_HOST_CACHED_BIT
                                                | VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT
                                        #if VMA_VULKAN_VERSION >= 1001000
                                                | VK_MEMORY_PROPERTY_PROTECTED_BIT
                                        #endif
                                        #if VK_AMD_device_coherent_memory
                                                | VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY
                                                | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY
                                        #endif
                                                );
                                            if (flags != 0)
                                                json.WriteNumber(flags);
                                        }
                                        json.EndArray();

                                        json.WriteString("Stats");
                                        VmaPrintDetailedStatistics(json, stats.memoryType[typeIndex]);
                                    }
                                    json.EndObject();
                                }
                            }

                        }
                        json.EndObject();
                    }
                    json.EndObject();
                }
            }
            json.EndObject();
        }

        if (detailedMap == VK_TRUE)
            allocator->PrintDetailedMap(json);

        json.EndObject();
    }

    *ppStatsString = VmaCreateStringCopy(allocator->GetAllocationCallbacks(), sb.GetData(), sb.GetLength());
}

VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
    VmaAllocator allocator,
    char* pStatsString)
{
    if(pStatsString != VMA_NULL)
    {
        VMA_ASSERT(allocator);
        VmaFreeString(allocator->GetAllocationCallbacks(), pStatsString);
    }
}

#endif // VMA_STATS_STRING_ENABLED

/*
This function is not protected by any mutex because it just reads immutable data.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
    VmaAllocator allocator,
    uint32_t memoryTypeBits,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    uint32_t* pMemoryTypeIndex)
{
    VMA_ASSERT(allocator != VK_NULL_HANDLE);
    VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);

    return allocator->FindMemoryTypeIndex(memoryTypeBits, pAllocationCreateInfo, VmaBufferImageUsage::UNKNOWN, pMemoryTypeIndex);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
    VmaAllocator allocator,
    const VkBufferCreateInfo* pBufferCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    uint32_t* pMemoryTypeIndex)
{
    VMA_ASSERT(allocator != VK_NULL_HANDLE);
    VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
    VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);

    const VkDevice hDev = allocator->m_hDevice;
    const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
    VkResult res;

#if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
    if(funcs->vkGetDeviceBufferMemoryRequirements)
    {
        // Can query straight from VkBufferCreateInfo :)
        VkDeviceBufferMemoryRequirementsKHR devBufMemReq = {VK_STRUCTURE_TYPE_DEVICE_BUFFER_MEMORY_REQUIREMENTS_KHR};
        devBufMemReq.pCreateInfo = pBufferCreateInfo;

        VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
        (*funcs->vkGetDeviceBufferMemoryRequirements)(hDev, &devBufMemReq, &memReq);

        res = allocator->FindMemoryTypeIndex(
            memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo,
            VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), pMemoryTypeIndex);
    }
    else
#endif // VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
    {
        // Must create a dummy buffer to query :(
        VkBuffer hBuffer = VK_NULL_HANDLE;
        res = funcs->vkCreateBuffer(
            hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
        if(res == VK_SUCCESS)
        {
            VkMemoryRequirements memReq = {};
            funcs->vkGetBufferMemoryRequirements(hDev, hBuffer, &memReq);

            res = allocator->FindMemoryTypeIndex(
                memReq.memoryTypeBits, pAllocationCreateInfo,
                VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), pMemoryTypeIndex);

            funcs->vkDestroyBuffer(
                hDev, hBuffer, allocator->GetAllocationCallbacks());
        }
    }
    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
    VmaAllocator allocator,
    const VkImageCreateInfo* pImageCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    uint32_t* pMemoryTypeIndex)
{
    VMA_ASSERT(allocator != VK_NULL_HANDLE);
    VMA_ASSERT(pImageCreateInfo != VMA_NULL);
    VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);

    const VkDevice hDev = allocator->m_hDevice;
    const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
    VkResult res;

#if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
    if(funcs->vkGetDeviceImageMemoryRequirements)
    {
        // Can query straight from VkImageCreateInfo :)
        VkDeviceImageMemoryRequirementsKHR devImgMemReq = {VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS_KHR};
        devImgMemReq.pCreateInfo = pImageCreateInfo;
        VMA_ASSERT(pImageCreateInfo->tiling != VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY && (pImageCreateInfo->flags & VK_IMAGE_CREATE_DISJOINT_BIT_COPY) == 0 &&
            "Cannot use this VkImageCreateInfo with vmaFindMemoryTypeIndexForImageInfo as I don't know what to pass as VkDeviceImageMemoryRequirements::planeAspect.");

        VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
        (*funcs->vkGetDeviceImageMemoryRequirements)(hDev, &devImgMemReq, &memReq);

        res = allocator->FindMemoryTypeIndex(
            memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo,
            VmaBufferImageUsage(*pImageCreateInfo), pMemoryTypeIndex);
    }
    else
#endif // VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
    {
        // Must create a dummy image to query :(
        VkImage hImage = VK_NULL_HANDLE;
        res = funcs->vkCreateImage(
            hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
        if(res == VK_SUCCESS)
        {
            VkMemoryRequirements memReq = {};
            funcs->vkGetImageMemoryRequirements(hDev, hImage, &memReq);

            res = allocator->FindMemoryTypeIndex(
                memReq.memoryTypeBits, pAllocationCreateInfo,
                VmaBufferImageUsage(*pImageCreateInfo), pMemoryTypeIndex);

            funcs->vkDestroyImage(
                hDev, hImage, allocator->GetAllocationCallbacks());
        }
    }
    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
    VmaAllocator allocator,
    const VmaPoolCreateInfo* pCreateInfo,
    VmaPool* pPool)
{
    VMA_ASSERT(allocator && pCreateInfo && pPool);

    VMA_DEBUG_LOG("vmaCreatePool");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->CreatePool(pCreateInfo, pPool);
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
    VmaAllocator allocator,
    VmaPool pool)
{
    VMA_ASSERT(allocator);

    if(pool == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyPool");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->DestroyPool(pool);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
    VmaAllocator allocator,
    VmaPool pool,
    VmaStatistics* pPoolStats)
{
    VMA_ASSERT(allocator && pool && pPoolStats);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->GetPoolStatistics(pool, pPoolStats);
}

VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
    VmaAllocator allocator,
    VmaPool pool,
    VmaDetailedStatistics* pPoolStats)
{
    VMA_ASSERT(allocator && pool && pPoolStats);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->CalculatePoolStatistics(pool, pPoolStats);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
{
    VMA_ASSERT(allocator && pool);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VMA_DEBUG_LOG("vmaCheckPoolCorruption");

    return allocator->CheckPoolCorruption(pool);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
    VmaAllocator allocator,
    VmaPool pool,
    const char** ppName)
{
    VMA_ASSERT(allocator && pool && ppName);

    VMA_DEBUG_LOG("vmaGetPoolName");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *ppName = pool->GetName();
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
    VmaAllocator allocator,
    VmaPool pool,
    const char* pName)
{
    VMA_ASSERT(allocator && pool);

    VMA_DEBUG_LOG("vmaSetPoolName");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    pool->SetName(pName);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
    VmaAllocator allocator,
    const VkMemoryRequirements* pVkMemoryRequirements,
    const VmaAllocationCreateInfo* pCreateInfo,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);

    VMA_DEBUG_LOG("vmaAllocateMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult result = allocator->AllocateMemory(
        *pVkMemoryRequirements,
        false, // requiresDedicatedAllocation
        false, // prefersDedicatedAllocation
        VK_NULL_HANDLE, // dedicatedBuffer
        VK_NULL_HANDLE, // dedicatedImage
        VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_UNKNOWN,
        1, // allocationCount
        pAllocation);

    if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
    {
        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    }

    return result;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
    VmaAllocator allocator,
    const VkMemoryRequirements* pVkMemoryRequirements,
    const VmaAllocationCreateInfo* pCreateInfo,
    size_t allocationCount,
    VmaAllocation* pAllocations,
    VmaAllocationInfo* pAllocationInfo)
{
    if(allocationCount == 0)
    {
        return VK_SUCCESS;
    }

    VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocations);

    VMA_DEBUG_LOG("vmaAllocateMemoryPages");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult result = allocator->AllocateMemory(
        *pVkMemoryRequirements,
        false, // requiresDedicatedAllocation
        false, // prefersDedicatedAllocation
        VK_NULL_HANDLE, // dedicatedBuffer
        VK_NULL_HANDLE, // dedicatedImage
        VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_UNKNOWN,
        allocationCount,
        pAllocations);

    if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
    {
        for(size_t i = 0; i < allocationCount; ++i)
        {
            allocator->GetAllocationInfo(pAllocations[i], pAllocationInfo + i);
        }
    }

    return result;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
    VmaAllocator allocator,
    VkBuffer buffer,
    const VmaAllocationCreateInfo* pCreateInfo,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);

    VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkMemoryRequirements vkMemReq = {};
    bool requiresDedicatedAllocation = false;
    bool prefersDedicatedAllocation = false;
    allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
        requiresDedicatedAllocation,
        prefersDedicatedAllocation);

    VkResult result = allocator->AllocateMemory(
        vkMemReq,
        requiresDedicatedAllocation,
        prefersDedicatedAllocation,
        buffer, // dedicatedBuffer
        VK_NULL_HANDLE, // dedicatedImage
        VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_BUFFER,
        1, // allocationCount
        pAllocation);

    if(pAllocationInfo && result == VK_SUCCESS)
    {
        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    }

    return result;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
    VmaAllocator allocator,
    VkImage image,
    const VmaAllocationCreateInfo* pCreateInfo,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);

    VMA_DEBUG_LOG("vmaAllocateMemoryForImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkMemoryRequirements vkMemReq = {};
    bool requiresDedicatedAllocation = false;
    bool prefersDedicatedAllocation = false;
    allocator->GetImageMemoryRequirements(image, vkMemReq,
        requiresDedicatedAllocation, prefersDedicatedAllocation);

    VkResult result = allocator->AllocateMemory(
        vkMemReq,
        requiresDedicatedAllocation,
        prefersDedicatedAllocation,
        VK_NULL_HANDLE, // dedicatedBuffer
        image, // dedicatedImage
        VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
        1, // allocationCount
        pAllocation);

    if(pAllocationInfo && result == VK_SUCCESS)
    {
        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    }

    return result;
}

VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
    VmaAllocator allocator,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);

    if(allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaFreeMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->FreeMemory(
        1, // allocationCount
        &allocation);
}

VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
    VmaAllocator allocator,
    size_t allocationCount,
    const VmaAllocation* pAllocations)
{
    if(allocationCount == 0)
    {
        return;
    }

    VMA_ASSERT(allocator);

    VMA_DEBUG_LOG("vmaFreeMemoryPages");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->FreeMemory(allocationCount, pAllocations);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && allocation && pAllocationInfo);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->GetAllocationInfo(allocation, pAllocationInfo);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo2(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VmaAllocationInfo2* pAllocationInfo)
{
    VMA_ASSERT(allocator && allocation && pAllocationInfo);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->GetAllocationInfo2(allocation, pAllocationInfo);
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
    VmaAllocator allocator,
    VmaAllocation allocation,
    void* pUserData)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocation->SetUserData(allocator, pUserData);
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    const char* VMA_NULLABLE pName)
{
    allocation->SetName(allocator, pName);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    VkMemoryPropertyFlags* VMA_NOT_NULL pFlags)
{
    VMA_ASSERT(allocator && allocation && pFlags);
    const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    *pFlags = allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
    VmaAllocator allocator,
    VmaAllocation allocation,
    void** ppData)
{
    VMA_ASSERT(allocator && allocation && ppData);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->Map(allocation, ppData);
}

VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
    VmaAllocator allocator,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->Unmap(allocation);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize offset,
    VkDeviceSize size)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_LOG("vmaFlushAllocation");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize offset,
    VkDeviceSize size)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_LOG("vmaInvalidateAllocation");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
    VmaAllocator allocator,
    uint32_t allocationCount,
    const VmaAllocation* allocations,
    const VkDeviceSize* offsets,
    const VkDeviceSize* sizes)
{
    VMA_ASSERT(allocator);

    if(allocationCount == 0)
    {
        return VK_SUCCESS;
    }

    VMA_ASSERT(allocations);

    VMA_DEBUG_LOG("vmaFlushAllocations");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_FLUSH);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
    VmaAllocator allocator,
    uint32_t allocationCount,
    const VmaAllocation* allocations,
    const VkDeviceSize* offsets,
    const VkDeviceSize* sizes)
{
    VMA_ASSERT(allocator);

    if(allocationCount == 0)
    {
        return VK_SUCCESS;
    }

    VMA_ASSERT(allocations);

    VMA_DEBUG_LOG("vmaInvalidateAllocations");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_INVALIDATE);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyMemoryToAllocation(
    VmaAllocator allocator,
    const void* pSrcHostPointer,
    VmaAllocation dstAllocation,
    VkDeviceSize dstAllocationLocalOffset,
    VkDeviceSize size)
{
    VMA_ASSERT(allocator && pSrcHostPointer && dstAllocation);

    if(size == 0)
    {
        return VK_SUCCESS;
    }

    VMA_DEBUG_LOG("vmaCopyMemoryToAllocation");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->CopyMemoryToAllocation(pSrcHostPointer, dstAllocation, dstAllocationLocalOffset, size);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyAllocationToMemory(
    VmaAllocator allocator,
    VmaAllocation srcAllocation,
    VkDeviceSize srcAllocationLocalOffset,
    void* pDstHostPointer,
    VkDeviceSize size)
{
    VMA_ASSERT(allocator && srcAllocation && pDstHostPointer);

    if(size == 0)
    {
        return VK_SUCCESS;
    }

    VMA_DEBUG_LOG("vmaCopyAllocationToMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->CopyAllocationToMemory(srcAllocation, srcAllocationLocalOffset, pDstHostPointer, size);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
    VmaAllocator allocator,
    uint32_t memoryTypeBits)
{
    VMA_ASSERT(allocator);

    VMA_DEBUG_LOG("vmaCheckCorruption");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->CheckCorruption(memoryTypeBits);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
    VmaAllocator allocator,
    const VmaDefragmentationInfo* pInfo,
    VmaDefragmentationContext* pContext)
{
    VMA_ASSERT(allocator && pInfo && pContext);

    VMA_DEBUG_LOG("vmaBeginDefragmentation");

    if (pInfo->pool != VMA_NULL)
    {
        // Check if run on supported algorithms
        if (pInfo->pool->m_BlockVector.GetAlgorithm() & VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
            return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pContext = vma_new(allocator, VmaDefragmentationContext_T)(allocator, *pInfo);
    return VK_SUCCESS;
}

VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
    VmaAllocator allocator,
    VmaDefragmentationContext context,
    VmaDefragmentationStats* pStats)
{
    VMA_ASSERT(allocator && context);

    VMA_DEBUG_LOG("vmaEndDefragmentation");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if (pStats)
        context->GetStats(*pStats);
    vma_delete(allocator, context);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaDefragmentationContext VMA_NOT_NULL context,
    VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
{
    VMA_ASSERT(context && pPassInfo);

    VMA_DEBUG_LOG("vmaBeginDefragmentationPass");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return context->DefragmentPassBegin(*pPassInfo);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaDefragmentationContext VMA_NOT_NULL context,
    VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
{
    VMA_ASSERT(context && pPassInfo);

    VMA_DEBUG_LOG("vmaEndDefragmentationPass");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return context->DefragmentPassEnd(*pPassInfo);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkBuffer buffer)
{
    VMA_ASSERT(allocator && allocation && buffer);

    VMA_DEBUG_LOG("vmaBindBufferMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindBufferMemory(allocation, 0, buffer, VMA_NULL);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize allocationLocalOffset,
    VkBuffer buffer,
    const void* pNext)
{
    VMA_ASSERT(allocator && allocation && buffer);

    VMA_DEBUG_LOG("vmaBindBufferMemory2");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindBufferMemory(allocation, allocationLocalOffset, buffer, pNext);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkImage image)
{
    VMA_ASSERT(allocator && allocation && image);

    VMA_DEBUG_LOG("vmaBindImageMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindImageMemory(allocation, 0, image, VMA_NULL);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize allocationLocalOffset,
    VkImage image,
    const void* pNext)
{
    VMA_ASSERT(allocator && allocation && image);

    VMA_DEBUG_LOG("vmaBindImageMemory2");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindImageMemory(allocation, allocationLocalOffset, image, pNext);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
    VmaAllocator allocator,
    const VkBufferCreateInfo* pBufferCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkBuffer* pBuffer,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);

    if(pBufferCreateInfo->size == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
        !allocator->m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_LOG("vmaCreateBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pBuffer = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkBuffer.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
        allocator->m_hDevice,
        pBufferCreateInfo,
        allocator->GetAllocationCallbacks(),
        pBuffer);
    if(res >= 0)
    {
        // 2. vkGetBufferMemoryRequirements.
        VkMemoryRequirements vkMemReq = {};
        bool requiresDedicatedAllocation = false;
        bool prefersDedicatedAllocation = false;
        allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
            requiresDedicatedAllocation, prefersDedicatedAllocation);

        // 3. Allocate memory using allocator.
        res = allocator->AllocateMemory(
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            *pBuffer, // dedicatedBuffer
            VK_NULL_HANDLE, // dedicatedImage
            VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), // dedicatedBufferImageUsage
            *pAllocationCreateInfo,
            VMA_SUBALLOCATION_TYPE_BUFFER,
            1, // allocationCount
            pAllocation);

        if(res >= 0)
        {
            // 4. Bind buffer with memory.
            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
            {
                res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
            }
            if(res >= 0)
            {
                // All steps succeeded.
                #if VMA_STATS_STRING_ENABLED
                    (*pAllocation)->InitBufferUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5);
                #endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }

                return VK_SUCCESS;
            }
            allocator->FreeMemory(
                1, // allocationCount
                pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
            *pBuffer = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
        *pBuffer = VK_NULL_HANDLE;
        return res;
    }
    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
    VmaAllocator allocator,
    const VkBufferCreateInfo* pBufferCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkDeviceSize minAlignment,
    VkBuffer* pBuffer,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && VmaIsPow2(minAlignment) && pBuffer && pAllocation);

    if(pBufferCreateInfo->size == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
        !allocator->m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_LOG("vmaCreateBufferWithAlignment");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pBuffer = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkBuffer.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
        allocator->m_hDevice,
        pBufferCreateInfo,
        allocator->GetAllocationCallbacks(),
        pBuffer);
    if(res >= 0)
    {
        // 2. vkGetBufferMemoryRequirements.
        VkMemoryRequirements vkMemReq = {};
        bool requiresDedicatedAllocation = false;
        bool prefersDedicatedAllocation = false;
        allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
            requiresDedicatedAllocation, prefersDedicatedAllocation);

        // 2a. Include minAlignment
        vkMemReq.alignment = VMA_MAX(vkMemReq.alignment, minAlignment);

        // 3. Allocate memory using allocator.
        res = allocator->AllocateMemory(
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            *pBuffer, // dedicatedBuffer
            VK_NULL_HANDLE, // dedicatedImage
            VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), // dedicatedBufferImageUsage
            *pAllocationCreateInfo,
            VMA_SUBALLOCATION_TYPE_BUFFER,
            1, // allocationCount
            pAllocation);

        if(res >= 0)
        {
            // 4. Bind buffer with memory.
            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
            {
                res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
            }
            if(res >= 0)
            {
                // All steps succeeded.
                #if VMA_STATS_STRING_ENABLED
                    (*pAllocation)->InitBufferUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5);
                #endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }

                return VK_SUCCESS;
            }
            allocator->FreeMemory(
                1, // allocationCount
                pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
            *pBuffer = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
        *pBuffer = VK_NULL_HANDLE;
        return res;
    }
    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
    VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
{
    return vmaCreateAliasingBuffer2(allocator, allocation, 0, pBufferCreateInfo, pBuffer);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer2(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    VkDeviceSize allocationLocalOffset,
    const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
    VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
{
    VMA_ASSERT(allocator && pBufferCreateInfo && pBuffer && allocation);
    VMA_ASSERT(allocationLocalOffset + pBufferCreateInfo->size <= allocation->GetSize());

    VMA_DEBUG_LOG("vmaCreateAliasingBuffer2");

    *pBuffer = VK_NULL_HANDLE;

    if (pBufferCreateInfo->size == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    if ((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
        !allocator->m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    // 1. Create VkBuffer.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
        allocator->m_hDevice,
        pBufferCreateInfo,
        allocator->GetAllocationCallbacks(),
        pBuffer);
    if (res >= 0)
    {
        // 2. Bind buffer with memory.
        res = allocator->BindBufferMemory(allocation, allocationLocalOffset, *pBuffer, VMA_NULL);
        if (res >= 0)
        {
            return VK_SUCCESS;
        }
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    }
    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
    VmaAllocator allocator,
    VkBuffer buffer,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);

    if(buffer == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if(buffer != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    }

    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(
            1, // allocationCount
            &allocation);
    }
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
    VmaAllocator allocator,
    const VkImageCreateInfo* pImageCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkImage* pImage,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);

    if(pImageCreateInfo->extent.width == 0 ||
        pImageCreateInfo->extent.height == 0 ||
        pImageCreateInfo->extent.depth == 0 ||
        pImageCreateInfo->mipLevels == 0 ||
        pImageCreateInfo->arrayLayers == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_LOG("vmaCreateImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pImage = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkImage.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
        allocator->m_hDevice,
        pImageCreateInfo,
        allocator->GetAllocationCallbacks(),
        pImage);
    if(res == VK_SUCCESS)
    {
        VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
            VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
            VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;

        // 2. Allocate memory using allocator.
        VkMemoryRequirements vkMemReq = {};
        bool requiresDedicatedAllocation = false;
        bool prefersDedicatedAllocation = false;
        allocator->GetImageMemoryRequirements(*pImage, vkMemReq,
            requiresDedicatedAllocation, prefersDedicatedAllocation);

        res = allocator->AllocateMemory(
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            VK_NULL_HANDLE, // dedicatedBuffer
            *pImage, // dedicatedImage
            VmaBufferImageUsage(*pImageCreateInfo), // dedicatedBufferImageUsage
            *pAllocationCreateInfo,
            suballocType,
            1, // allocationCount
            pAllocation);

        if(res == VK_SUCCESS)
        {
            // 3. Bind image with memory.
            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
            {
                res = allocator->BindImageMemory(*pAllocation, 0, *pImage, VMA_NULL);
            }
            if(res == VK_SUCCESS)
            {
                // All steps succeeded.
                #if VMA_STATS_STRING_ENABLED
                    (*pAllocation)->InitImageUsage(*pImageCreateInfo);
                #endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }

                return VK_SUCCESS;
            }
            allocator->FreeMemory(
                1, // allocationCount
                pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
            *pImage = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
        *pImage = VK_NULL_HANDLE;
        return res;
    }
    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
    VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
{
    return vmaCreateAliasingImage2(allocator, allocation, 0, pImageCreateInfo, pImage);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage2(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    VkDeviceSize allocationLocalOffset,
    const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
    VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
{
    VMA_ASSERT(allocator && pImageCreateInfo && pImage && allocation);

    *pImage = VK_NULL_HANDLE;

    VMA_DEBUG_LOG("vmaCreateAliasingImage2");

    if (pImageCreateInfo->extent.width == 0 ||
        pImageCreateInfo->extent.height == 0 ||
        pImageCreateInfo->extent.depth == 0 ||
        pImageCreateInfo->mipLevels == 0 ||
        pImageCreateInfo->arrayLayers == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    // 1. Create VkImage.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
        allocator->m_hDevice,
        pImageCreateInfo,
        allocator->GetAllocationCallbacks(),
        pImage);
    if (res >= 0)
    {
        // 2. Bind image with memory.
        res = allocator->BindImageMemory(allocation, allocationLocalOffset, *pImage, VMA_NULL);
        if (res >= 0)
        {
            return VK_SUCCESS;
        }
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    }
    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
    VmaAllocator VMA_NOT_NULL allocator,
    VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
    VmaAllocation VMA_NULLABLE allocation)
{
    VMA_ASSERT(allocator);

    if(image == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if(image != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    }
    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(
            1, // allocationCount
            &allocation);
    }
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
    const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
    VmaVirtualBlock VMA_NULLABLE * VMA_NOT_NULL pVirtualBlock)
{
    VMA_ASSERT(pCreateInfo && pVirtualBlock);
    VMA_ASSERT(pCreateInfo->size > 0);
    VMA_DEBUG_LOG("vmaCreateVirtualBlock");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    *pVirtualBlock = vma_new(pCreateInfo->pAllocationCallbacks, VmaVirtualBlock_T)(*pCreateInfo);
    VkResult res = (*pVirtualBlock)->Init();
    if(res < 0)
    {
        vma_delete(pCreateInfo->pAllocationCallbacks, *pVirtualBlock);
        *pVirtualBlock = VK_NULL_HANDLE;
    }
    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(VmaVirtualBlock VMA_NULLABLE virtualBlock)
{
    if(virtualBlock != VK_NULL_HANDLE)
    {
        VMA_DEBUG_LOG("vmaDestroyVirtualBlock");
        VMA_DEBUG_GLOBAL_MUTEX_LOCK;
        VkAllocationCallbacks allocationCallbacks = virtualBlock->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
        vma_delete(&allocationCallbacks, virtualBlock);
    }
}

VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
    VMA_DEBUG_LOG("vmaIsVirtualBlockEmpty");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    return virtualBlock->IsEmpty() ? VK_TRUE : VK_FALSE;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pVirtualAllocInfo != VMA_NULL);
    VMA_DEBUG_LOG("vmaGetVirtualAllocationInfo");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->GetAllocationInfo(allocation, *pVirtualAllocInfo);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
    VkDeviceSize* VMA_NULLABLE pOffset)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pCreateInfo != VMA_NULL && pAllocation != VMA_NULL);
    VMA_DEBUG_LOG("vmaVirtualAllocate");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    return virtualBlock->Allocate(*pCreateInfo, *pAllocation, pOffset);
}

VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(VmaVirtualBlock VMA_NOT_NULL virtualBlock, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation)
{
    if(allocation != VK_NULL_HANDLE)
    {
        VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
        VMA_DEBUG_LOG("vmaVirtualFree");
        VMA_DEBUG_GLOBAL_MUTEX_LOCK;
        virtualBlock->Free(allocation);
    }
}

VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
    VMA_DEBUG_LOG("vmaClearVirtualBlock");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->Clear();
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, void* VMA_NULLABLE pUserData)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
    VMA_DEBUG_LOG("vmaSetVirtualAllocationUserData");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->SetAllocationUserData(allocation, pUserData);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaStatistics* VMA_NOT_NULL pStats)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
    VMA_DEBUG_LOG("vmaGetVirtualBlockStatistics");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->GetStatistics(*pStats);
}

VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaDetailedStatistics* VMA_NOT_NULL pStats)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
    VMA_DEBUG_LOG("vmaCalculateVirtualBlockStatistics");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->CalculateDetailedStatistics(*pStats);
}

#if VMA_STATS_STRING_ENABLED

VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    char* VMA_NULLABLE * VMA_NOT_NULL ppStatsString, VkBool32 detailedMap)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && ppStatsString != VMA_NULL);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    const VkAllocationCallbacks* allocationCallbacks = virtualBlock->GetAllocationCallbacks();
    VmaStringBuilder sb(allocationCallbacks);
    virtualBlock->BuildStatsString(detailedMap != VK_FALSE, sb);
    *ppStatsString = VmaCreateStringCopy(allocationCallbacks, sb.GetData(), sb.GetLength());
}

VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    char* VMA_NULLABLE pStatsString)
{
    if(pStatsString != VMA_NULL)
    {
        VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
        VMA_DEBUG_GLOBAL_MUTEX_LOCK;
        VmaFreeString(virtualBlock->GetAllocationCallbacks(), pStatsString);
    }
}
#if VMA_EXTERNAL_MEMORY_WIN32
VMA_CALL_PRE VkResult VMA_CALL_POST vmaGetMemoryWin32Handle(VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation, HANDLE hTargetProcess, HANDLE* VMA_NOT_NULL pHandle)
{
    VMA_ASSERT(allocator && allocation && pHandle);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    return allocation->GetWin32Handle(allocator, hTargetProcess, pHandle);
}
#endif // VMA_EXTERNAL_MEMORY_WIN32
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_PUBLIC_INTERFACE

#if defined(__GNUC__) && !defined(__clang__)
#pragma GCC diagnostic pop
#elif defined(__clang__)
#pragma clang diagnostic pop
#endif

#endif // VMA_IMPLEMENTATION

/**
\page quick_start Quick start

\section quick_start_project_setup Project setup

Vulkan Memory Allocator comes in the form of an "stb-style" single header file.
While you can pull the entire repository e.g. as a Git submodule, and a CMake script is also provided,
you don't need to build it as a separate library project.
You can add the file "vk_mem_alloc.h" directly to your project and submit it to your code repository next to your other source files.

"Single header" doesn't mean that everything is contained in C/C++ declarations,
as it tends to be in the case of inline functions or C++ templates.
It means that the implementation is bundled with the interface in a single file and needs to be extracted using a preprocessor macro.
If you don't do it properly, it will result in linker errors.

To do it properly:

-# Include the "vk_mem_alloc.h" file in each CPP file where you want to use the library.
   This includes declarations of all members of the library.
-# In exactly one CPP file define the following macro before this include.
   It also enables internal definitions.

\code
#define VMA_IMPLEMENTATION
#include "vk_mem_alloc.h"
\endcode

It may be a good idea to create a dedicated CPP file just for this purpose, e.g. "VmaUsage.cpp".

This library includes the header `<vulkan/vulkan.h>`, which in turn
includes `<windows.h>` on Windows. If you need some specific macros defined
before including these headers (like `WIN32_LEAN_AND_MEAN` or
`WINVER` for Windows, `VK_USE_PLATFORM_WIN32_KHR` for Vulkan), you must define
them before every `#include` of this library.
It may be a good idea to create a dedicated header file for this purpose, e.g. "VmaUsage.h",
that will be included in other source files instead of the VMA header directly.

This library is written in C++, but has a C-compatible interface.
Thus, you can include and use "vk_mem_alloc.h" in C or C++ code, but the full
implementation with the `VMA_IMPLEMENTATION` macro must be compiled as C++, NOT as C.
Some features of C++14 are used and required. Features of C++20 are used optionally when available.
Some headers of the standard C and C++ library are used, but STL containers, RTTI, and C++ exceptions are not.


\section quick_start_initialization Initialization

VMA offers a library interface in a style similar to Vulkan, with object handles like #VmaAllocation,
structures describing parameters of objects to be created like #VmaAllocationCreateInfo,
and error codes returned from functions using the `VkResult` type.

The first and main object that needs to be created is #VmaAllocator.
It represents the initialization of the entire library.
Only one such object should be created per `VkDevice`.
You should create it at program startup, after the `VkDevice` has been created, and before any device memory allocation needs to be made.
It must be destroyed before the `VkDevice` is destroyed.

At program startup:

-# Initialize Vulkan to have `VkInstance`, `VkPhysicalDevice`, and `VkDevice` objects.
-# Fill the VmaAllocatorCreateInfo structure and call vmaCreateAllocator() to create the #VmaAllocator object.

Only the members `physicalDevice`, `device`, and `instance` are required.
However, you should inform the library which Vulkan version you use by setting
VmaAllocatorCreateInfo::vulkanApiVersion and which extensions you enabled
by setting VmaAllocatorCreateInfo::flags.
Otherwise, VMA would use only features of Vulkan 1.0 core with no extensions.
See below for details.
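
As a minimal sketch using only the required members (the variables `physicalDevice`, `device`, and `instance` are assumed to be your existing Vulkan handles):

\code
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);

// ...use the allocator...

// At program shutdown, before vkDestroyDevice():
vmaDestroyAllocator(allocator);
\endcode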
16777
16778\subsection quick_start_initialization_selecting_vulkan_version Selecting Vulkan version
16779
16780VMA supports Vulkan version down to 1.0, for backward compatibility.
16781If you want to use higher version, you need to inform the library about it.
16782This is a two-step process.
16783
16784<b>Step 1: Compile time.</b> By default, VMA compiles with code supporting the highest
16785Vulkan version found in the included `<vulkan/vulkan.h>` that is also supported by the library.
16786If this is OK, you don't need to do anything.
However, if you want to compile VMA as if only some lower Vulkan version was available,
define the macro `VMA_VULKAN_VERSION` before every `#include "vk_mem_alloc.h"`.
It should have a decimal numeric value in the form ABBBCCC, where A = major, BBB = minor, CCC = patch Vulkan version.
16790For example, to compile against Vulkan 1.2:
16791
16792\code
16793#define VMA_VULKAN_VERSION 1002000 // Vulkan 1.2
16794#include "vk_mem_alloc.h"
16795\endcode
16796
<b>Step 2: Runtime.</b> Even when compiled with a higher Vulkan version available,
VMA can use only features of a lower version, which is configurable during creation of the #VmaAllocator object.
16799By default, only Vulkan 1.0 is used.
To initialize the allocator with support for a higher Vulkan version, you need to set the member
VmaAllocatorCreateInfo::vulkanApiVersion to an appropriate value, e.g. using constants like `VK_API_VERSION_1_2`.
16802See code sample below.
16803
16804\subsection quick_start_initialization_importing_vulkan_functions Importing Vulkan functions
16805
You may need to configure how Vulkan functions are imported. There are three ways to do this:
16807
16808-# **If you link with Vulkan static library** (e.g. "vulkan-1.lib" on Windows):
16809 - You don't need to do anything.
   - VMA will use these functions, as the macro `VMA_STATIC_VULKAN_FUNCTIONS` is defined to 1 by default.
16811-# **If you want VMA to fetch pointers to Vulkan functions dynamically** using `vkGetInstanceProcAddr`,
16812 `vkGetDeviceProcAddr` (this is the option presented in the example below):
16813 - Define `VMA_STATIC_VULKAN_FUNCTIONS` to 0, `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 1.
16814 - Provide pointers to these two functions via VmaVulkanFunctions::vkGetInstanceProcAddr,
16815 VmaVulkanFunctions::vkGetDeviceProcAddr.
16816 - The library will fetch pointers to all other functions it needs internally.
16817-# **If you fetch pointers to all Vulkan functions in a custom way**, e.g. using some loader like
16818 [Volk](https://github.com/zeux/volk):
16819 - Define `VMA_STATIC_VULKAN_FUNCTIONS` and `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 0.
16820 - Pass these pointers via structure #VmaVulkanFunctions.
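
For the third option, a minimal sketch, assuming you use [Volk](https://github.com/zeux/volk) and have already called `volkInitialize()` and `volkLoadDevice()`, so the global `vk*` symbols are loaded function pointers. The structure has many more members than shown here:

\code
#define VMA_STATIC_VULKAN_FUNCTIONS 0
#define VMA_DYNAMIC_VULKAN_FUNCTIONS 0
#include "vk_mem_alloc.h"

...

VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;
vulkanFunctions.vkAllocateMemory = vkAllocateMemory;
vulkanFunctions.vkFreeMemory = vkFreeMemory;
// ...and so on for all remaining members of VmaVulkanFunctions.

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
// Fill the remaining members as shown in the example below.
\endcode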
16821
16822\subsection quick_start_initialization_enabling_extensions Enabling extensions
16823
VMA can automatically make use of the following Vulkan extensions.
If you found them available on the selected physical device and you enabled them
while creating the `VkInstance` / `VkDevice` objects, inform VMA about their availability
by setting the appropriate flags in VmaAllocatorCreateInfo::flags.
16828
16829Vulkan extension | VMA flag
16830------------------------------|-----------------------------------------------------
16831VK_KHR_dedicated_allocation | #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT
16832VK_KHR_bind_memory2 | #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT
16833VK_KHR_maintenance4 | #VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT
16834VK_KHR_maintenance5 | #VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT
16835VK_EXT_memory_budget | #VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT
16836VK_KHR_buffer_device_address | #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
16837VK_EXT_memory_priority | #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
16838VK_AMD_device_coherent_memory | #VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
16839VK_KHR_external_memory_win32 | #VMA_ALLOCATOR_CREATE_KHR_EXTERNAL_MEMORY_WIN32_BIT
16840
16841Example with fetching pointers to Vulkan functions dynamically:
16842
16843\code
16844#define VMA_STATIC_VULKAN_FUNCTIONS 0
16845#define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
16846#include "vk_mem_alloc.h"
16847
16848...
16849
16850VmaVulkanFunctions vulkanFunctions = {};
16851vulkanFunctions.vkGetInstanceProcAddr = &vkGetInstanceProcAddr;
16852vulkanFunctions.vkGetDeviceProcAddr = &vkGetDeviceProcAddr;
16853
16854VmaAllocatorCreateInfo allocatorCreateInfo = {};
16855allocatorCreateInfo.flags = VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT;
16856allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
16857allocatorCreateInfo.physicalDevice = physicalDevice;
16858allocatorCreateInfo.device = device;
16859allocatorCreateInfo.instance = instance;
16860allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
16861
16862VmaAllocator allocator;
16863vmaCreateAllocator(&allocatorCreateInfo, &allocator);
16864
16865// Entire program...
16866
16867// At the end, don't forget to:
16868vmaDestroyAllocator(allocator);
16869\endcode
16870
16871
16872\subsection quick_start_initialization_other_config Other configuration options
16873
There are additional configuration options available through preprocessor macros that you can define
before including the VMA header, and through parameters passed in #VmaAllocatorCreateInfo.
They include the possibility to use your own callbacks for host memory allocations (`VkAllocationCallbacks`),
callbacks for device memory allocations (instead of `vkAllocateMemory`, `vkFreeMemory`),
and your custom `VMA_ASSERT` macro, among others.
16879For more information, see: @ref configuration.
16880
16881
16882\section quick_start_resource_allocation Resource allocation
16883
16884When you want to create a buffer or image:
16885
-# Fill the `VkBufferCreateInfo` / `VkImageCreateInfo` structure.
-# Fill the VmaAllocationCreateInfo structure.
-# Call vmaCreateBuffer() / vmaCreateImage() to get a `VkBuffer`/`VkImage` with memory
   already allocated and bound to it, plus a #VmaAllocation object that represents its underlying memory.
16890
16891\code
16892VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
16893bufferInfo.size = 65536;
16894bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
16895
16896VmaAllocationCreateInfo allocInfo = {};
16897allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
16898
16899VkBuffer buffer;
16900VmaAllocation allocation;
16901vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
16902\endcode
16903
16904Don't forget to destroy your buffer and allocation objects when no longer needed:
16905
16906\code
16907vmaDestroyBuffer(allocator, buffer, allocation);
16908\endcode
16909
If you need to map the buffer, you must set the flag
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
in VmaAllocationCreateInfo::flags.
There are many additional parameters that can control the choice of the memory type used for the allocation,
and other features.
16915For more information, see documentation chapters: @ref choosing_memory_type, @ref memory_mapping.
16916
16917
16918\page choosing_memory_type Choosing memory type
16919
Physical devices in Vulkan support various combinations of memory heaps and
types. Help with choosing the correct and optimal memory type for your specific
resource is one of the key features of this library. You can use it by filling
appropriate members of the VmaAllocationCreateInfo structure, as described below.
You can also combine multiple methods.
16925
-# If you just want to find a memory type index that meets your requirements, you
   can use one of the functions vmaFindMemoryTypeIndexForBufferInfo(),
   vmaFindMemoryTypeIndexForImageInfo(), vmaFindMemoryTypeIndex().
-# If you want to allocate a region of device memory without association with any
   specific image or buffer, you can use the function vmaAllocateMemory(). Usage of
   this function is not recommended and usually not needed.
   The vmaAllocateMemoryPages() function is also provided for creating multiple allocations at once,
   which may be useful for sparse binding.
-# If you already have a buffer or an image created, want to allocate memory
   for it, and will then bind it yourself, you can use the functions
   vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage() - see the sketch below.
   For binding, you should use the functions vmaBindBufferMemory(), vmaBindImageMemory(),
   or their extended versions vmaBindBufferMemory2(), vmaBindImageMemory2().
-# If you want to create a buffer or an image, allocate memory for it, and bind
   them together, all in one call, you can use the functions vmaCreateBuffer(),
   vmaCreateImage().
   <b>This is the easiest and recommended way to use this library!</b>
16943
When using method 3 or 4, the library internally queries Vulkan for the memory types
supported for that buffer or image (e.g. using the function `vkGetBufferMemoryRequirements()`)
and uses only one of these types.
16947
16948If no memory type can be found that meets all the requirements, these functions
16949return `VK_ERROR_FEATURE_NOT_PRESENT`.
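
For illustration, a minimal sketch of method 3, with hypothetical names; `allocator` and `device` are assumed to already exist. It selects the memory type via explicit flags, since the `VMA_MEMORY_USAGE_AUTO*` values need the `VkBufferCreateInfo` to be passed (see \ref choosing_memory_type_usage):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VkBuffer buf;
vkCreateBuffer(device, &bufCreateInfo, nullptr, &buf);

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

// Allocate memory suitable for this buffer, then bind it yourself.
VmaAllocation alloc;
vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, nullptr);
vmaBindBufferMemory(allocator, alloc, buf);
\endcode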
16950
You can leave the VmaAllocationCreateInfo structure completely zero-filled.
It means no requirements are specified for the memory type.
This is valid, although not very useful.
16954
16955\section choosing_memory_type_usage Usage
16956
The easiest way to specify memory requirements is to fill the member
VmaAllocationCreateInfo::usage with one of the values of the enum #VmaMemoryUsage.
It defines high-level, common usage types.
Since version 3 of the library, it is recommended to use #VMA_MEMORY_USAGE_AUTO to let it select the best memory type for your resource automatically.
16961
For example, if you want to create a uniform buffer that will be filled using
a transfer only once or infrequently and then used for rendering every frame, you can
do it using the following code. The buffer will most likely end up in a memory type with
`VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`, so it is fast to access by the GPU.
16966
16967\code
16968VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
16969bufferInfo.size = 65536;
16970bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
16971
16972VmaAllocationCreateInfo allocInfo = {};
16973allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
16974
16975VkBuffer buffer;
16976VmaAllocation allocation;
16977vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
16978\endcode
16979
If you have a preference for putting the resource in GPU (device) memory or CPU (host) memory
on systems with a discrete graphics card where these memories are separate, you can use
#VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST.
16983
When using `VMA_MEMORY_USAGE_AUTO*` and you want to map the allocated memory,
you also need to specify one of the host access flags:
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
This helps the library decide on a preferred memory type and ensure it has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
so you can map it.
16989
For example, a staging buffer that will be filled via a mapped pointer and then
used as a source of a transfer to the buffer described previously can be created like this.
It will likely end up in a memory type that is `HOST_VISIBLE` and `HOST_COHERENT`
but not `HOST_CACHED` (meaning uncached, write-combined) and not `DEVICE_LOCAL` (meaning system RAM).
16994
16995\code
16996VkBufferCreateInfo stagingBufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
16997stagingBufferInfo.size = 65536;
16998stagingBufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
16999
17000VmaAllocationCreateInfo stagingAllocInfo = {};
17001stagingAllocInfo.usage = VMA_MEMORY_USAGE_AUTO;
17002stagingAllocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;
17003
17004VkBuffer stagingBuffer;
17005VmaAllocation stagingAllocation;
17006vmaCreateBuffer(allocator, &stagingBufferInfo, &stagingAllocInfo, &stagingBuffer, &stagingAllocation, nullptr);
17007\endcode
17008
17009For more examples of creating different kinds of resources, see chapter \ref usage_patterns.
17010See also: @ref memory_mapping.
17011
The usage values `VMA_MEMORY_USAGE_AUTO*` are legal to use only when the library knows
about the resource being created, i.e. when `VkBufferCreateInfo` / `VkImageCreateInfo` is passed,
so they work with functions like vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo() etc.
If you allocate raw memory using the function vmaAllocateMemory(), you have to use other means of selecting the
memory type, as described below.
17017
17018\note
Old usage values (`VMA_MEMORY_USAGE_GPU_ONLY`, `VMA_MEMORY_USAGE_CPU_ONLY`,
`VMA_MEMORY_USAGE_CPU_TO_GPU`, `VMA_MEMORY_USAGE_GPU_TO_CPU`, `VMA_MEMORY_USAGE_CPU_COPY`)
are still available and work the same way as in previous versions of the library
for backward compatibility, but they are deprecated.
17023
17024\section choosing_memory_type_required_preferred_flags Required and preferred flags
17025
17026You can specify more detailed requirements by filling members
17027VmaAllocationCreateInfo::requiredFlags and VmaAllocationCreateInfo::preferredFlags
with a combination of bits from the enum `VkMemoryPropertyFlags`. For example,
if you want to create a buffer that will be persistently mapped on the host (so it
must be `HOST_VISIBLE`) and preferably will also be `HOST_COHERENT` and `HOST_CACHED`,
use the following code:
17032
17033\code
17034VmaAllocationCreateInfo allocInfo = {};
17035allocInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
17036allocInfo.preferredFlags = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
17037allocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT | VMA_ALLOCATION_CREATE_MAPPED_BIT;
17038
17039VkBuffer buffer;
17040VmaAllocation allocation;
17041vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
17042\endcode
17043
17044A memory type is chosen that has all the required flags and as many preferred
17045flags set as possible.
17046
The value passed in VmaAllocationCreateInfo::usage is internally converted to a set of required and preferred flags,
plus some extra "magic" (heuristics).
17049
17050\section choosing_memory_type_explicit_memory_types Explicit memory types
17051
17052If you inspected memory types available on the physical device and <b>you have
17053a preference for memory types that you want to use</b>, you can fill member
17054VmaAllocationCreateInfo::memoryTypeBits. It is a bit mask, where each bit set
17055means that a memory type with that index is allowed to be used for the
allocation. The special value 0, just like `UINT32_MAX`, means there are no
restrictions on the memory type index.
17058
Please note that this member is NOT just a memory type index.
Still, you can use it to choose just one specific memory type.
For example, if you already determined that your buffer should be created in
memory type 2, use the following code:
17063
17064\code
17065uint32_t memoryTypeIndex = 2;
17066
17067VmaAllocationCreateInfo allocInfo = {};
17068allocInfo.memoryTypeBits = 1u << memoryTypeIndex;
17069
17070VkBuffer buffer;
17071VmaAllocation allocation;
17072vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
17073\endcode
17074
You can also use this parameter to <b>exclude some memory types</b>.
If you inspect the memory heaps and types available on the current physical device and
determine that for some reason you don't want to use a specific memory type for the allocation,
you can enable automatic memory type selection but exclude certain memory types
by setting all bits of `memoryTypeBits` to 1 except the ones you choose.
17080
17081\code
17082// ...
17083uint32_t excludedMemoryTypeIndex = 2;
17084VmaAllocationCreateInfo allocInfo = {};
17085allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
17086allocInfo.memoryTypeBits = ~(1u << excludedMemoryTypeIndex);
17087// ...
17088\endcode
17089
17090
17091\section choosing_memory_type_custom_memory_pools Custom memory pools
17092
If you allocate from a custom memory pool, the ways of specifying memory
requirements described above are not applicable, and the aforementioned members
of the VmaAllocationCreateInfo structure are ignored. The memory type is selected
explicitly when creating the pool and is then used to make all the allocations from
that pool. For further details, see \ref custom_memory_pools.
17098
17099\section choosing_memory_type_dedicated_allocations Dedicated allocations
17100
Memory for allocations is reserved out of a larger block of `VkDeviceMemory`
allocated from Vulkan internally. That is the main feature of this whole library.
You can still request a separate memory block to be created for an allocation,
just like you would do in a trivial solution without using any allocator.
In that case, a buffer or image is always bound to that memory at offset 0.
This is called a "dedicated allocation".
You can explicitly request it by using the flag #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT, as shown in the sketch after the list below.
The library can also internally decide to use a dedicated allocation in some cases, e.g.:
17109
- When the size of the allocation is large.
- When the [VK_KHR_dedicated_allocation](@ref vk_khr_dedicated_allocation) extension is enabled
  and it reports that a dedicated allocation is required or recommended for the resource.
- When allocation of the next big memory block fails due to insufficient device memory,
  but an allocation with the exact requested size succeeds.
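
A minimal sketch of an explicit request, with the image parameters elided as in the other examples:

\code
VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
// Fill image parameters as usual...

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
// Request a separate VkDeviceMemory block just for this image.
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;

VkImage img;
VmaAllocation alloc;
vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
\endcode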
17115
17116
17117\page memory_mapping Memory mapping
17118
To "map memory" in Vulkan means to obtain a CPU pointer to `VkDeviceMemory`,
to be able to read from it or write to it in CPU code.
Mapping is possible only for memory allocated from a memory type that has
the `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` flag.
Functions `vkMapMemory()`, `vkUnmapMemory()` are designed for this purpose.
You can use them directly with memory allocated by this library,
but it is not recommended because of the following issue:
mapping the same `VkDeviceMemory` block multiple times is illegal - only one mapping at a time is allowed.
This includes mapping disjoint regions. Mapping is not reference-counted internally by Vulkan,
and it is also not thread-safe.
Because of this, Vulkan Memory Allocator provides the following facilities:
17130
17131\note If you want to be able to map an allocation, you need to specify one of the flags
17132#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
17133in VmaAllocationCreateInfo::flags. These flags are required for an allocation to be mappable
17134when using #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` enum values.
For other usage values they are ignored, and every such allocation made in a `HOST_VISIBLE` memory type is mappable,
but these flags can still be used for consistency.
17137
17138\section memory_mapping_copy_functions Copy functions
17139
The easiest way to copy data from a host pointer to an allocation is to use the convenience function vmaCopyMemoryToAllocation().
It automatically maps the Vulkan memory temporarily (if not already mapped), performs `memcpy`,
and calls `vkFlushMappedMemoryRanges` (if required - that is, if the memory type is not `HOST_COHERENT`).

It is also the safest approach, because using `memcpy` avoids the risk of accidentally introducing memory reads
(e.g. by doing `pMappedVectors[i] += v`), which may be very slow on memory types that are not `HOST_CACHED`.
17146
17147\code
17148struct ConstantBuffer
17149{
17150 ...
17151};
17152ConstantBuffer constantBufferData = ...
17153
17154VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
17155bufCreateInfo.size = sizeof(ConstantBuffer);
17156bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
17157
17158VmaAllocationCreateInfo allocCreateInfo = {};
17159allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
17160allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;
17161
17162VkBuffer buf;
17163VmaAllocation alloc;
17164vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
17165
17166vmaCopyMemoryToAllocation(allocator, &constantBufferData, alloc, 0, sizeof(ConstantBuffer));
17167\endcode
17168
A copy in the other direction - from an allocation to a host pointer - can be performed the same way using the function vmaCopyAllocationToMemory().
17170
17171\section memory_mapping_mapping_functions Mapping functions
17172
The library provides the following functions for mapping a specific allocation: vmaMapMemory(), vmaUnmapMemory().
They are safer and more convenient to use than the standard Vulkan functions.
You can map an allocation multiple times simultaneously - mapping is reference-counted internally.
You can also map different allocations simultaneously, regardless of whether they use the same `VkDeviceMemory` block.
Internally, the library always maps the entire memory block, not just the region of the allocation.
For further details, see the description of the vmaMapMemory() function.
17179Example:
17180
17181\code
17182// Having these objects initialized:
17183struct ConstantBuffer
17184{
17185 ...
17186};
17187ConstantBuffer constantBufferData = ...
17188
17189VmaAllocator allocator = ...
17190VkBuffer constantBuffer = ...
17191VmaAllocation constantBufferAllocation = ...
17192
17193// You can map and fill your buffer using following code:
17194
17195void* mappedData;
17196vmaMapMemory(allocator, constantBufferAllocation, &mappedData);
17197memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
17198vmaUnmapMemory(allocator, constantBufferAllocation);
17199\endcode
17200
When mapping, you may see a warning from the Vulkan validation layer similar to this one:
17202
17203<i>Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.</i>
17204
It happens because the library maps the entire `VkDeviceMemory` block, where different
types of images and buffers may end up together, especially on GPUs with unified memory like Intel.
You can safely ignore it if you are sure you access only the memory of the intended
object that you wanted to map.
17209
17210
17211\section memory_mapping_persistently_mapped_memory Persistently mapped memory
17212
Keeping your memory persistently mapped is generally OK in Vulkan.
You don't need to unmap it before using its data on the GPU.
The library provides a special feature designed for that:
allocations made with the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag set in
VmaAllocationCreateInfo::flags stay mapped all the time,
so you can just access the CPU pointer at any time,
without needing to call any "map" or "unmap" function.
17220Example:
17221
17222\code
17223VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
17224bufCreateInfo.size = sizeof(ConstantBuffer);
17225bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
17226
17227VmaAllocationCreateInfo allocCreateInfo = {};
17228allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
17229allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
17230 VMA_ALLOCATION_CREATE_MAPPED_BIT;
17231
17232VkBuffer buf;
17233VmaAllocation alloc;
17234VmaAllocationInfo allocInfo;
17235vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
17236
17237// Buffer is already mapped. You can access its memory.
17238memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
17239\endcode
17240
17241\note #VMA_ALLOCATION_CREATE_MAPPED_BIT by itself doesn't guarantee that the allocation will end up
17242in a mappable memory type.
17243For this, you need to also specify #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or
17244#VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
17245#VMA_ALLOCATION_CREATE_MAPPED_BIT only guarantees that if the memory is `HOST_VISIBLE`, the allocation will be mapped on creation.
17246For an example of how to make use of this fact, see section \ref usage_patterns_advanced_data_uploading.
17247
17248\section memory_mapping_cache_control Cache flush and invalidate
17249
Memory in Vulkan doesn't need to be unmapped before using it on the GPU,
but unless a memory type has the `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT` flag set,
you need to manually **invalidate** the cache before reading from a mapped pointer
and **flush** the cache after writing to a mapped pointer.
Map/unmap operations don't do that automatically.
Vulkan provides the following functions for this purpose: `vkFlushMappedMemoryRanges()`,
`vkInvalidateMappedMemoryRanges()`, but this library provides more convenient
functions that operate on a given allocation object: vmaFlushAllocation(),
vmaInvalidateAllocation(),
or on multiple objects at once: vmaFlushAllocations(), vmaInvalidateAllocations().
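
A minimal sketch of a write followed by a flush, assuming `alloc` is an allocation with host access enabled and `myData` is some data to upload (both hypothetical):

\code
void* mapped = nullptr;
vmaMapMemory(allocator, alloc, &mapped);
memcpy(mapped, &myData, sizeof(myData));
// Flush the written range; the library skips this internally
// if the memory type is HOST_COHERENT.
vmaFlushAllocation(allocator, alloc, 0, sizeof(myData));
vmaUnmapMemory(allocator, alloc);
\endcode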
17260
Regions of memory specified for flush/invalidate must be aligned to
`VkPhysicalDeviceLimits::nonCoherentAtomSize`. This is automatically ensured by the library.
In any memory type that is `HOST_VISIBLE` but not `HOST_COHERENT`, all allocations
within blocks are aligned to this value, so their offsets are always a multiple of
`nonCoherentAtomSize`, and two different allocations never share the same "line" of this size.
17266
Also, Windows drivers from all 3 PC GPU vendors (AMD, Intel, NVIDIA)
currently provide the `HOST_COHERENT` flag on all memory types that are
`HOST_VISIBLE`, so on PC you may not need to bother.
17270
17271
17272\page staying_within_budget Staying within budget
17273
When developing a graphics-intensive game or program, it is important to avoid allocating
more GPU memory than is physically available. When memory is over-committed,
various bad things can happen, depending on the specific GPU, graphics driver, and
operating system:
17278
17279- It may just work without any problems.
- The application may slow down because some memory blocks are moved to system RAM
  and the GPU has to access them through the PCI Express bus.
- A new allocation may take a very long time to complete, even a few seconds, and possibly
  freeze the entire system.
- The new allocation may fail with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
- It may even result in a GPU crash (TDR), observed as `VK_ERROR_DEVICE_LOST`
  returned somewhere later.
17287
17288\section staying_within_budget_querying_for_budget Querying for budget
17289
To query the current memory usage and available budget, use the function vmaGetHeapBudgets().
The returned structure #VmaBudget contains quantities expressed in bytes, per Vulkan memory heap.
17292
17293Please note that this function returns different information and works faster than
17294vmaCalculateStatistics(). vmaGetHeapBudgets() can be called every frame or even before every
17295allocation, while vmaCalculateStatistics() is intended to be used rarely,
17296only to obtain statistical information, e.g. for debugging purposes.
17297
It is recommended to use the <b>VK_EXT_memory_budget</b> device extension to obtain information
about the budget from the Vulkan device. VMA is able to use this extension automatically.
When it is not enabled, the allocator behaves the same way, but then it estimates current usage
and available budget based on its internal information and Vulkan memory heap sizes,
which may be less precise. In order to use this extension:
17303
1. Make sure the extensions VK_EXT_memory_budget and VK_KHR_get_physical_device_properties2
   required by it are available, and enable them. Please note that the first is a device
   extension and the second is an instance extension!
173072. Use flag #VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT when creating #VmaAllocator object.
173083. Make sure to call vmaSetCurrentFrameIndex() every frame. Budget is queried from
17309 Vulkan inside of it to avoid overhead of querying it with every allocation.
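
For step 3, a minimal sketch, assuming `frameIndex` is a counter your application increments once per frame:

\code
// Call once per frame, e.g. right after acquiring the next swapchain image.
vmaSetCurrentFrameIndex(allocator, frameIndex);
\endcode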
17310
17311\section staying_within_budget_controlling_memory_usage Controlling memory usage
17312
17313There are many ways in which you can try to stay within the budget.
17314
First, when making a new allocation requires allocating a new memory block, the library
tries not to exceed the budget automatically. If a block with the default recommended size
(e.g. 256 MB) would go over budget, a smaller block is allocated, possibly even
dedicated memory for just this resource.
17319
17320If the size of the requested resource plus current memory usage is more than the
17321budget, by default the library still tries to create it, leaving it to the Vulkan
17322implementation whether the allocation succeeds or fails. You can change this behavior
17323by using #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag. With it, the allocation is
17324not made if it would exceed the budget or if the budget is already exceeded.
17325VMA then tries to make the allocation from the next eligible Vulkan memory type.
17326If all of them fail, the call then fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
An example usage pattern may be to pass the #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag
when creating resources that are not essential for the application (e.g. the texture
of a specific object) and not to pass it when creating critically important resources
(e.g. render targets).
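
A minimal sketch of that pattern for a non-essential texture, with the image parameters elided:

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
// Fail with VK_ERROR_OUT_OF_DEVICE_MEMORY instead of exceeding the budget.
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT;
\endcode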
17331
On AMD graphics cards there is a custom vendor extension available: <b>VK_AMD_memory_overallocation_behavior</b>
that allows controlling the behavior of the Vulkan implementation in out-of-memory cases -
whether it should fail with an error code or still allow the allocation.
Usage of this extension involves only passing an extra structure on Vulkan device creation,
so it is out of scope for this library.
17337
Finally, you can also use the #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT flag to make sure
a new allocation is created only when it fits inside one of the existing memory blocks.
If it would require allocating a new block, it fails instead with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
This also ensures that the function call is very fast, because it never goes to Vulkan
to obtain a new block.
17343
17344\note Creating \ref custom_memory_pools with VmaPoolCreateInfo::minBlockCount
17345set to more than 0 will currently try to allocate memory blocks without checking whether they
17346fit within budget.
17347
17348
17349\page resource_aliasing Resource aliasing (overlap)
17350
The new explicit graphics APIs (Vulkan and Direct3D 12), thanks to manual memory
management, give you the opportunity to alias (overlap) multiple resources in the
same region of memory - a feature not available in the older APIs (Direct3D 11, OpenGL).
It can be useful for saving video memory, but it must be used with caution.
17355
17356For example, if you know the flow of your whole render frame in advance, you
17357are going to use some intermediate textures or buffers only during a small range of render passes,
17358and you know these ranges don't overlap in time, you can bind these resources to
17359the same place in memory, even if they have completely different parameters (width, height, format etc.).
17360
17361![Resource aliasing (overlap)](../gfx/Aliasing.png)
17362
Such a scenario is possible using VMA, but you need to create your images manually.
Then you need to calculate the parameters of the allocation to be made using this formula:
17365
17366- allocation size = max(size of each image)
17367- allocation alignment = max(alignment of each image)
17368- allocation memoryTypeBits = bitwise AND(memoryTypeBits of each image)
17369
The following example shows two different images bound to the same place in memory,
allocated to fit the largest of them.
17372
17373\code
17374// A 512x512 texture to be sampled.
17375VkImageCreateInfo img1CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
17376img1CreateInfo.imageType = VK_IMAGE_TYPE_2D;
17377img1CreateInfo.extent.width = 512;
17378img1CreateInfo.extent.height = 512;
17379img1CreateInfo.extent.depth = 1;
17380img1CreateInfo.mipLevels = 10;
17381img1CreateInfo.arrayLayers = 1;
17382img1CreateInfo.format = VK_FORMAT_R8G8B8A8_SRGB;
17383img1CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
17384img1CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
17385img1CreateInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
17386img1CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
17387
17388// A full screen texture to be used as color attachment.
17389VkImageCreateInfo img2CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
17390img2CreateInfo.imageType = VK_IMAGE_TYPE_2D;
17391img2CreateInfo.extent.width = 1920;
17392img2CreateInfo.extent.height = 1080;
17393img2CreateInfo.extent.depth = 1;
17394img2CreateInfo.mipLevels = 1;
17395img2CreateInfo.arrayLayers = 1;
17396img2CreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
17397img2CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
17398img2CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
17399img2CreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
17400img2CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
17401
17402VkImage img1;
17403res = vkCreateImage(device, &img1CreateInfo, nullptr, &img1);
17404VkImage img2;
17405res = vkCreateImage(device, &img2CreateInfo, nullptr, &img2);
17406
17407VkMemoryRequirements img1MemReq;
17408vkGetImageMemoryRequirements(device, img1, &img1MemReq);
17409VkMemoryRequirements img2MemReq;
17410vkGetImageMemoryRequirements(device, img2, &img2MemReq);
17411
17412VkMemoryRequirements finalMemReq = {};
17413finalMemReq.size = std::max(img1MemReq.size, img2MemReq.size);
17414finalMemReq.alignment = std::max(img1MemReq.alignment, img2MemReq.alignment);
17415finalMemReq.memoryTypeBits = img1MemReq.memoryTypeBits & img2MemReq.memoryTypeBits;
// Validate that finalMemReq.memoryTypeBits != 0 - otherwise, the images cannot share memory.
17417
17418VmaAllocationCreateInfo allocCreateInfo = {};
17419allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
17420
17421VmaAllocation alloc;
17422res = vmaAllocateMemory(allocator, &finalMemReq, &allocCreateInfo, &alloc, nullptr);
17423
17424res = vmaBindImageMemory(allocator, alloc, img1);
17425res = vmaBindImageMemory(allocator, alloc, img2);
17426
17427// You can use img1, img2 here, but not at the same time!
17428
17429vmaFreeMemory(allocator, alloc);
vkDestroyImage(device, img2, nullptr);
vkDestroyImage(device, img1, nullptr);
17432\endcode
17433
17434VMA also provides convenience functions that create a buffer or image and bind it to memory
17435represented by an existing #VmaAllocation:
17436vmaCreateAliasingBuffer(), vmaCreateAliasingBuffer2(),
17437vmaCreateAliasingImage(), vmaCreateAliasingImage2().
The versions with "2" offer an additional parameter `allocationLocalOffset`.
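
For example, a hedged sketch using one of these functions to place another image in the allocation from the example above, reusing `img1CreateInfo` just for illustration:

\code
VkImage img3;
res = vmaCreateAliasingImage2(allocator, alloc, 0 /* allocationLocalOffset */,
    &img1CreateInfo, &img3);
// The image doesn't own the memory - destroy it with plain vkDestroyImage().
vkDestroyImage(device, img3, nullptr);
\endcode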
17439
17440Remember that using resources that alias in memory requires proper synchronization.
17441You need to issue a memory barrier to make sure commands that use `img1` and `img2`
17442don't overlap on GPU timeline.
17443You also need to treat a resource after aliasing as uninitialized - containing garbage data.
17444For example, if you use `img1` and then want to use `img2`, you need to issue
17445an image memory barrier for `img2` with `oldLayout` = `VK_IMAGE_LAYOUT_UNDEFINED`.
17446
17447Additional considerations:
17448
- Vulkan also allows interpreting the contents of memory between aliasing resources consistently in some cases.
See chapter 11.8 "Memory Aliasing" of the Vulkan specification or the `VK_IMAGE_CREATE_ALIAS_BIT` flag.
- You can create a more complex layout where different images and buffers are bound
at different offsets inside one large allocation. For example, one can imagine
a big texture used in some render passes, aliasing with a set of many small buffers
used in some further passes. To bind a resource at a non-zero offset in an allocation,
use vmaBindBufferMemory2() / vmaBindImageMemory2().
- Before allocating memory for the resources you want to alias, check the `memoryTypeBits`
returned in the memory requirements of each resource to make sure the bits overlap.
Some GPUs may expose multiple memory types suitable e.g. only for buffers or
images with `COLOR_ATTACHMENT` usage, so the sets of memory types supported by your
resources may be disjoint. Aliasing them is not possible in that case.
17461
17462
17463\page custom_memory_pools Custom memory pools
17464
A memory pool contains a number of `VkDeviceMemory` blocks.
The library automatically creates and manages a default pool for each memory type available on the device.
Default memory pools automatically grow in size.
The size of allocated blocks is also variable and managed automatically.
You are using the default pools whenever you leave VmaAllocationCreateInfo::pool = null.
17470
You can create a custom pool and allocate memory out of it.
This can be useful if you want to:
17473
- Keep a certain kind of allocations separate from others.
- Enforce a particular, fixed size of Vulkan memory blocks.
- Limit the maximum amount of Vulkan memory allocated for that pool.
- Reserve a minimum or fixed amount of Vulkan memory always preallocated for that pool.
17478- Use extra parameters for a set of your allocations that are available in #VmaPoolCreateInfo but not in
17479 #VmaAllocationCreateInfo - e.g., custom minimum alignment, custom `pNext` chain.
17480- Perform defragmentation on a specific subset of your allocations.
17481
17482To use custom memory pools:
17483
-# Fill the VmaPoolCreateInfo structure.
-# Call vmaCreatePool() to obtain the #VmaPool handle.
-# When making an allocation, set VmaAllocationCreateInfo::pool to this handle.
   You don't need to specify any other parameters of this structure, like `usage`.
17488
17489Example:
17490
17491\code
17492// Find memoryTypeIndex for the pool.
17493VkBufferCreateInfo sampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
17494sampleBufCreateInfo.size = 0x10000; // Doesn't matter.
17495sampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
17496
17497VmaAllocationCreateInfo sampleAllocCreateInfo = {};
17498sampleAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
17499
17500uint32_t memTypeIndex;
17501VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
17502 &sampleBufCreateInfo, &sampleAllocCreateInfo, &memTypeIndex);
17503// Check res...
17504
17505// Create a pool that can have at most 2 blocks, 128 MiB each.
17506VmaPoolCreateInfo poolCreateInfo = {};
17507poolCreateInfo.memoryTypeIndex = memTypeIndex;
17508poolCreateInfo.blockSize = 128ull * 1024 * 1024;
17509poolCreateInfo.maxBlockCount = 2;
17510
17511VmaPool pool;
17512res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
17513// Check res...
17514
17515// Allocate a buffer out of it.
17516VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
17517bufCreateInfo.size = 1024;
17518bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
17519
17520VmaAllocationCreateInfo allocCreateInfo = {};
17521allocCreateInfo.pool = pool;
17522
17523VkBuffer buf;
17524VmaAllocation alloc;
17525res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
17526// Check res...
17527\endcode
17528
17529You have to free all allocations made from this pool before destroying it.
17530
17531\code
17532vmaDestroyBuffer(allocator, buf, alloc);
17533vmaDestroyPool(allocator, pool);
17534\endcode
17535
New versions of this library support creating dedicated allocations in custom pools.
This is supported only when VmaPoolCreateInfo::blockSize = 0.
To use this feature, set VmaAllocationCreateInfo::pool to the pointer to your custom pool and
VmaAllocationCreateInfo::flags to #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT, as in the sketch below.
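
A minimal sketch, assuming `pool` was created with `blockSize = 0`:

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool; // Custom pool with VmaPoolCreateInfo::blockSize = 0.
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
\endcode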
17540
17541
17542\section custom_memory_pools_MemTypeIndex Choosing memory type index
17543
When creating a pool, you must explicitly specify a memory type index.
To find the one suitable for your buffers or images, you can use the helper functions
vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo().
You need to provide structures with example parameters of the buffers or images
that you are going to create in that pool.
17549
17550\code
17551VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
17552exampleBufCreateInfo.size = 1024; // Doesn't matter
17553exampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
17554
17555VmaAllocationCreateInfo allocCreateInfo = {};
17556allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
17557
17558uint32_t memTypeIndex;
17559vmaFindMemoryTypeIndexForBufferInfo(allocator, &exampleBufCreateInfo, &allocCreateInfo, &memTypeIndex);
17560
17561VmaPoolCreateInfo poolCreateInfo = {};
17562poolCreateInfo.memoryTypeIndex = memTypeIndex;
17563// ...
17564\endcode
17565
When creating buffers/images allocated in that pool, provide the following parameters:

- `VkBufferCreateInfo`: Prefer to pass the same parameters as above.
  Otherwise you risk creating resources in a memory type that is not suitable for them, which may result in undefined behavior.
  Using different `VK_BUFFER_USAGE_` flags may work, but you shouldn't create images in a pool intended for buffers
  or the other way around.
- VmaAllocationCreateInfo: You don't need to pass the same parameters. Fill only the `pool` member.
  Other members are ignored anyway.
17574
17575
17576\section custom_memory_pools_when_not_use When not to use custom pools
17577
Custom pools are commonly overused by VMA users.
While it may feel natural to keep some logical groups of resources separate in memory,
in most cases it does more harm than good.
Using a custom pool shouldn't be your first choice.
Instead, please make all allocations from the default pools first, and only use custom pools
if you can prove and measure that it is beneficial in some way,
e.g. it results in lower memory usage, better performance, etc.
17585
17586Using custom pools has disadvantages:
17587
- Each pool has its own collection of `VkDeviceMemory` blocks.
  Some of them may be partially or even completely empty.
  Spreading allocations across multiple pools increases the amount of wasted (allocated but unbound) memory.
- You must manually choose the specific memory type to be used by a custom pool (set as VmaPoolCreateInfo::memoryTypeIndex).
  When using the default pools, the best memory type for each of your allocations can be selected automatically
  using a carefully designed algorithm that works across all kinds of GPUs.
- If an allocation from a custom pool at a specific memory type fails, the entire allocation operation returns failure.
  When using the default pools, VMA tries another compatible memory type.
- If you set VmaPoolCreateInfo::blockSize != 0, each memory block has the same size,
  while the default pools start from small blocks and allocate each next block larger and larger,
  up to the preferred block size.
17599
17600Many of the common concerns can be addressed in a different way than using custom pools:
17601
- If you want to keep your allocations of certain size (small versus large) or certain lifetime (transient versus long lived)
  separate, you likely don't need to.
  VMA uses a high-quality allocation algorithm that manages memory well in various cases.
  Please measure and check whether using custom pools provides a benefit.
- If you want to keep your images and buffers separate, you don't need to.
  VMA respects the `bufferImageGranularity` limit automatically.
- If you want to keep your mapped and not mapped allocations separate, you don't need to.
  VMA respects the `nonCoherentAtomSize` limit automatically.
  It also maps only those `VkDeviceMemory` blocks that need to map any allocation.
  It even tries to keep mappable and non-mappable allocations in separate blocks to minimize the amount of mapped memory.
- If you want to choose a custom size for the default memory blocks, you can set it globally instead
  using VmaAllocatorCreateInfo::preferredLargeHeapBlockSize.
- If you want to select a specific memory type for your allocation,
  you can set VmaAllocationCreateInfo::memoryTypeBits to `(1u << myMemoryTypeIndex)` instead.
- If you need to create a buffer with a certain minimum alignment, you can still do it
  using the default pools with the dedicated function vmaCreateBufferWithAlignment(), as in the sketch below.
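
A minimal sketch of that last option; the size, usage, and 256-byte alignment are just examples:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 1024;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    256, // minAlignment
    &buf, &alloc, nullptr);
\endcode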
17618
17619
17620\section linear_algorithm Linear allocation algorithm
17621
Each Vulkan memory block managed by this library has accompanying metadata that
keeps track of used and unused regions. By default, the metadata structure and
algorithm try to find the best place for new allocations among free regions to
optimize memory usage. This way, you can allocate and free objects in any order.
17626
17627![Default allocation algorithm](../gfx/Linear_allocator_1_algo_default.png)
17628
Sometimes there is a need to use a simpler, linear allocation algorithm. You can
create a custom pool that uses this algorithm by adding the flag
#VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT to VmaPoolCreateInfo::flags while creating the
#VmaPool object. Then an alternative metadata management is used. It always
creates new allocations after the last one and doesn't reuse free regions left after
allocations freed in the middle. This results in better allocation performance and
less memory consumed by metadata.
17636
17637![Linear allocation algorithm](../gfx/Linear_allocator_2_algo_linear.png)
17638
17639With this one flag, you can create a custom pool that can be used in many ways:
17640free-at-once, stack, double stack, and ring buffer. See below for details.
17641You don't need to specify explicitly which of these options you are going to use - it is detected automatically.
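
For reference, a minimal sketch of creating such a pool, assuming `memTypeIndex` was found as shown in \ref custom_memory_pools_MemTypeIndex (block size and count are just examples):

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
poolCreateInfo.blockSize = 16ull * 1024 * 1024;
poolCreateInfo.maxBlockCount = 1; // Required for double stack and ring buffer usage.

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode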
17642
17643\subsection linear_algorithm_free_at_once Free-at-once
17644
In a pool that uses the linear algorithm, you still need to free all the allocations
individually, e.g. by using vmaFreeMemory() or vmaDestroyBuffer(). You can free
them in any order. New allocations are always made after the last one - free space
in the middle is not reused. However, when you release all the allocations and
the pool becomes empty, allocation starts from the beginning again. This way, you
can use the linear algorithm to speed up creation of allocations that you are going
to release all at once.
17652
17653![Free-at-once](../gfx/Linear_allocator_3_free_at_once.png)
17654
This mode is also available for pools created with a VmaPoolCreateInfo::maxBlockCount
value that allows multiple memory blocks.
17657
17658\subsection linear_algorithm_stack Stack
17659
When you free an allocation that was created last, its space can be reused.
Thanks to this, if you always release allocations in the order opposite to their
creation (LIFO - Last In, First Out), you can achieve the behavior of a stack.
17663
17664![Stack](../gfx/Linear_allocator_4_stack.png)
17665
This mode is also available for pools created with a VmaPoolCreateInfo::maxBlockCount
value that allows multiple memory blocks.
17668
17669\subsection linear_algorithm_double_stack Double stack
17670
17671The space reserved by a custom pool with linear algorithm may be used by two
17672stacks:
17673
- The first, default one, growing up from offset 0.
- The second, "upper" one, growing down from the end towards lower offsets.
17676
To make an allocation from the upper stack, add the flag #VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT
to VmaAllocationCreateInfo::flags, as in the sketch at the end of this section.
17679
17680![Double stack](../gfx/Linear_allocator_7_double_stack.png)
17681
The double stack is available only in pools with one memory block -
VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise, the behavior is undefined.
17684
When the two stacks' ends meet, so there is not enough space between them for a
new allocation, such an allocation fails with the usual
`VK_ERROR_OUT_OF_DEVICE_MEMORY` error.
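
A minimal sketch of allocating on both stacks, assuming `pool` is a linear pool with `maxBlockCount = 1` and `bufCreateInfo` is a filled `VkBufferCreateInfo` (both hypothetical):

\code
VmaAllocationCreateInfo lowerAllocCreateInfo = {};
lowerAllocCreateInfo.pool = pool; // Grows up from offset 0.

VmaAllocationCreateInfo upperAllocCreateInfo = {};
upperAllocCreateInfo.pool = pool;
upperAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT; // Grows down from the end.

VkBuffer lowerBuf, upperBuf;
VmaAllocation lowerAlloc, upperAlloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &lowerAllocCreateInfo, &lowerBuf, &lowerAlloc, nullptr);
vmaCreateBuffer(allocator, &bufCreateInfo, &upperAllocCreateInfo, &upperBuf, &upperAlloc, nullptr);
\endcode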
17688
17689\subsection linear_algorithm_ring_buffer Ring buffer
17690
When you free some allocations from the beginning and there is not enough free space
for a new one at the end of the pool, the allocator's "cursor" wraps around to the
beginning and starts allocating there. Thanks to this, if you always release
allocations in the same order as you created them (FIFO - First In, First Out),
you can achieve the behavior of a ring buffer / queue.
17696
17697![Ring buffer](../gfx/Linear_allocator_5_ring_buffer.png)
17698
The ring buffer is available only in pools with one memory block -
VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise, the behavior is undefined.
17701
17702\note \ref defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.
17703
17704
17705\page defragmentation Defragmentation
17706
Interleaved allocations and deallocations of many objects of varying size can
cause fragmentation over time, which can lead to a situation where the library is unable
to find a continuous range of free memory for a new allocation, despite there being
enough free space, just scattered across many small free ranges between existing
allocations.
17712
To mitigate this problem, you can use the defragmentation feature.
It doesn't happen automatically, though, and needs your cooperation,
because VMA is a low-level library that only allocates memory.
It cannot recreate buffers and images in a new place, as it doesn't remember the contents of the `VkBufferCreateInfo` / `VkImageCreateInfo` structures.
It cannot copy their contents, as it doesn't record any commands to a command buffer.
17718
17719Example:
17720
17721\code
17722VmaDefragmentationInfo defragInfo = {};
17723defragInfo.pool = myPool;
17724defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;
17725
17726VmaDefragmentationContext defragCtx;
17727VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
17728// Check res...
17729
17730for(;;)
17731{
17732 VmaDefragmentationPassMoveInfo pass;
17733 res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
17734 if(res == VK_SUCCESS)
17735 break;
    else if(res != VK_INCOMPLETE)
    {
        // Handle error...
    }
17738
17739 for(uint32_t i = 0; i < pass.moveCount; ++i)
17740 {
17741 // Inspect pass.pMoves[i].srcAllocation, identify what buffer/image it represents.
17742 VmaAllocationInfo allocInfo;
17743 vmaGetAllocationInfo(allocator, pass.pMoves[i].srcAllocation, &allocInfo);
17744 MyEngineResourceData* resData = (MyEngineResourceData*)allocInfo.pUserData;
17745
17746 // Recreate and bind this buffer/image at: pass.pMoves[i].dstMemory, pass.pMoves[i].dstOffset.
17747 VkImageCreateInfo imgCreateInfo = ...
17748 VkImage newImg;
17749 res = vkCreateImage(device, &imgCreateInfo, nullptr, &newImg);
17750 // Check res...
17751 res = vmaBindImageMemory(allocator, pass.pMoves[i].dstTmpAllocation, newImg);
17752 // Check res...
17753
17754 // Issue a vkCmdCopyBuffer/vkCmdCopyImage to copy its content to the new place.
17755 vkCmdCopyImage(cmdBuf, resData->img, ..., newImg, ...);
17756 }
17757
17758 // Make sure the copy commands finished executing.
17759 vkWaitForFences(...);
17760
17761 // Destroy old buffers/images bound with pass.pMoves[i].srcAllocation.
17762 for(uint32_t i = 0; i < pass.moveCount; ++i)
17763 {
17764 // ...
17765 vkDestroyImage(device, resData->img, nullptr);
17766 }
17767
17768 // Update appropriate descriptors to point to the new places...
17769
17770 res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
17771 if(res == VK_SUCCESS)
17772 break;
    else if(res != VK_INCOMPLETE)
    {
        // Handle error...
    }
17775}
17776
17777vmaEndDefragmentation(allocator, defragCtx, nullptr);
17778\endcode
17779
Although functions like vmaCreateBuffer(), vmaCreateImage(), vmaDestroyBuffer(), vmaDestroyImage()
create/destroy an allocation and a buffer/image at once, these are just a shortcut for
creating the resource, allocating memory, and binding them together.
Defragmentation works on memory allocations only. You must handle the rest manually.
Defragmentation is an iterative process that should repeat "passes" as long as the related functions
return `VK_INCOMPLETE`, not `VK_SUCCESS`.
In each pass:
17787
177881. vmaBeginDefragmentationPass() function call:
17789 - Calculates and returns the list of allocations to be moved in this pass.
17790 Note this can be a time-consuming process.
17791 - Reserves destination memory for them by creating temporary destination allocations
17792 that you can query for their `VkDeviceMemory` + offset using vmaGetAllocationInfo().
177932. Inside the pass, **you should**:
17794 - Inspect the returned list of allocations to be moved.
17795 - Create new buffers/images and bind them at the returned destination temporary allocations.
17796 - Copy data from source to destination resources if necessary.
17797 - Destroy the source buffers/images, but NOT their allocations.
177983. vmaEndDefragmentationPass() function call:
17799 - Frees the source memory reserved for the allocations that are moved.
17800 - Modifies source #VmaAllocation objects that are moved to point to the destination reserved memory.
17801 - Frees `VkDeviceMemory` blocks that became empty.
17802
Unlike in previous iterations of the defragmentation API, there is no list of "movable" allocations passed as a parameter.
The defragmentation algorithm tries to move all suitable allocations.
You can, however, refuse to move some of them inside a defragmentation pass by setting
`pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
This is not recommended and may result in suboptimal packing of the allocations after defragmentation.
If you cannot ensure that every allocation can be moved, it is better to keep the movable allocations separate in a custom pool.
17809
17810Inside a pass, for each allocation that should be moved:
17811
17812- You should copy its data from the source to the destination place by calling e.g. `vkCmdCopyBuffer()`, `vkCmdCopyImage()`.
17813 - You need to make sure these commands finished executing before destroying the source buffers/images and before calling vmaEndDefragmentationPass().
17814- If a resource doesn't contain any meaningful data, e.g. it is a transient color attachment image to be cleared,
17815 filled, and used temporarily in each rendering frame, you can just recreate this image
17816 without copying its data.
17817- If the resource is in `HOST_VISIBLE` and `HOST_CACHED` memory, you can copy its data on the CPU
17818 using `memcpy()`.
17819- If you cannot move the allocation, you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
17820 This will cancel the move.
  - vmaEndDefragmentationPass() will then free the destination memory,
    not the source memory of the allocation, leaving it unchanged.
17823- If you decide the allocation is unimportant and can be destroyed instead of moved (e.g. it wasn't used for long time),
17824 you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
17825 - vmaEndDefragmentationPass() will then free both source and destination memory, and will destroy the source #VmaAllocation object.
17826
17827You can defragment a specific custom pool by setting VmaDefragmentationInfo::pool
17828(like in the example above) or all the default pools by setting this member to null.
17829
17830Defragmentation is always performed in each pool separately.
17831Allocations are never moved between different Vulkan memory types.
17832The size of the destination memory reserved for a moved allocation is the same as the original one.
The alignment of an allocation, as determined using `vkGetBufferMemoryRequirements()` etc., is also respected after defragmentation.
17834Buffers/images should be recreated with the same `VkBufferCreateInfo` / `VkImageCreateInfo` parameters as the original ones.
17835
You can perform the defragmentation incrementally to limit the number of allocations and bytes to be moved
in each pass, e.g. to call it in sync with render frames and avoid noticeable hitches.
17838See members: VmaDefragmentationInfo::maxBytesPerPass, VmaDefragmentationInfo::maxAllocationsPerPass.
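
For example, a sketch of such a limited configuration (the limit values are arbitrary):

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = VK_NULL_HANDLE; // Defragment the default pools.
defragInfo.maxBytesPerPass = 16ull * 1024 * 1024; // Move at most 16 MB per pass.
defragInfo.maxAllocationsPerPass = 64; // Move at most 64 allocations per pass.
\endcode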
17839
17840It is also safe to perform the defragmentation asynchronously to render frames and other Vulkan and VMA
17841usage, possibly from multiple threads, with the exception that allocations
17842returned in VmaDefragmentationPassMoveInfo::pMoves shouldn't be destroyed until the defragmentation pass is ended.
17843
17844<b>Mapping</b> is preserved on allocations that are moved during defragmentation.
17845Whether through #VMA_ALLOCATION_CREATE_MAPPED_BIT or vmaMapMemory(), the allocations
are mapped at their new place. Of course, the pointer to the mapped data changes, so it needs to be queried again
using VmaAllocationInfo::pMappedData.
17848
17849\note Defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.
17850
17851
17852\page statistics Statistics
17853
17854This library contains several functions that return information about its internal state,
17855especially the amount of memory allocated from Vulkan.
17856
17857\section statistics_numeric_statistics Numeric statistics
17858
17859If you need to obtain basic statistics about memory usage per heap, together with current budget,
17860you can call function vmaGetHeapBudgets() and inspect structure #VmaBudget.
17861This is useful to keep track of memory usage and stay within budget
17862(see also \ref staying_within_budget).
17863Example:
17864
17865\code
17866uint32_t heapIndex = ...
17867
17868VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
17869vmaGetHeapBudgets(allocator, budgets);
17870
17871printf("My heap currently has %u allocations taking %llu B,\n",
17872 budgets[heapIndex].statistics.allocationCount,
17873 budgets[heapIndex].statistics.allocationBytes);
17874printf("allocated out of %u Vulkan device memory blocks taking %llu B,\n",
17875 budgets[heapIndex].statistics.blockCount,
17876 budgets[heapIndex].statistics.blockBytes);
17877printf("Vulkan reports total usage %llu B with budget %llu B.\n",
17878 budgets[heapIndex].usage,
17879 budgets[heapIndex].budget);
17880\endcode
17881
17882You can query for more detailed statistics per memory heap, type, and totals,
17883including minimum and maximum allocation size and unused range size,
17884by calling function vmaCalculateStatistics() and inspecting structure #VmaTotalStatistics.
17885This function is slower though, as it has to traverse all the internal data structures,
17886so it should be used only for debugging purposes.
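
A brief sketch of this more detailed query:

\code
VmaTotalStatistics stats;
vmaCalculateStatistics(allocator, &stats);
printf("Total: %u allocations taking %llu B out of %llu B allocated from Vulkan,\n",
    stats.total.statistics.allocationCount,
    stats.total.statistics.allocationBytes,
    stats.total.statistics.blockBytes);
printf("largest single allocation: %llu B.\n",
    stats.total.allocationSizeMax);
\endcode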
17887
17888You can query for statistics of a custom pool using function vmaGetPoolStatistics()
17889or vmaCalculatePoolStatistics().
17890
17891You can query for information about a specific allocation using function vmaGetAllocationInfo().
It fills structure #VmaAllocationInfo.
17893
17894\section statistics_json_dump JSON dump
17895
17896You can dump internal state of the allocator to a string in JSON format using function vmaBuildStatsString().
17897The result is guaranteed to be correct JSON.
17898It uses ANSI encoding.
Any strings provided by the user (see [Allocation names](@ref allocation_names))
17900are copied as-is and properly escaped for JSON, so if they use UTF-8, ISO-8859-2 or any other encoding,
17901this JSON string can be treated as using this encoding.
17902It must be freed using function vmaFreeStatsString().
17903
The format of this JSON string is not part of the official documentation of the library,
but it will not change in a backward-incompatible way without an increase of the library's major version number
and an appropriate mention in the changelog.
17907
17908The JSON string contains all the data that can be obtained using vmaCalculateStatistics().
17909It can also contain detailed map of allocated memory blocks and their regions -
17910free and occupied by allocations.
17911This allows e.g. to visualize the memory or assess fragmentation.
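
A typical usage sketch:

\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE = include the detailed map of blocks.
// Write statsString to a file or pass it to a visualization tool...
vmaFreeStatsString(allocator, statsString);
\endcode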
17912
17913
17914\page allocation_annotation Allocation names and user data
17915
17916\section allocation_user_data Allocation user data
17917
17918You can annotate allocations with your own information, e.g. for debugging purposes.
17919To do that, fill VmaAllocationCreateInfo::pUserData field when creating
17920an allocation. It is an opaque `void*` pointer. You can use it e.g. as a pointer,
17921some handle, index, key, ordinal number or any other value that would associate
17922the allocation with your custom metadata.
17923It is useful to identify appropriate data structures in your engine given #VmaAllocation,
17924e.g. when doing \ref defragmentation.
17925
17926\code
17927VkBufferCreateInfo bufCreateInfo = ...
17928
17929MyBufferMetadata* pMetadata = CreateBufferMetadata();
17930
17931VmaAllocationCreateInfo allocCreateInfo = {};
17932allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
17933allocCreateInfo.pUserData = pMetadata;
17934
17935VkBuffer buffer;
17936VmaAllocation allocation;
17937vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, nullptr);
17938\endcode
17939
17940The pointer may be later retrieved as VmaAllocationInfo::pUserData:
17941
17942\code
17943VmaAllocationInfo allocInfo;
17944vmaGetAllocationInfo(allocator, allocation, &allocInfo);
17945MyBufferMetadata* pMetadata = (MyBufferMetadata*)allocInfo.pUserData;
17946\endcode
17947
17948It can also be changed using function vmaSetAllocationUserData().
17949
Values of (non-zero) allocations' `pUserData` are printed in the JSON report created by
vmaBuildStatsString(), in hexadecimal form.
17952
17953\section allocation_names Allocation names
17954
17955An allocation can also carry a null-terminated string, giving a name to the allocation.
17956To set it, call vmaSetAllocationName().
17957The library creates internal copy of the string, so the pointer you pass doesn't need
17958to be valid for whole lifetime of the allocation. You can free it after the call.
17959
17960\code
17961std::string imageName = "Texture: ";
17962imageName += fileName;
17963vmaSetAllocationName(allocator, allocation, imageName.c_str());
17964\endcode
17965
17966The string can be later retrieved by inspecting VmaAllocationInfo::pName.
17967It is also printed in JSON report created by vmaBuildStatsString().
17968
17969\note Setting string name to VMA allocation doesn't automatically set it to the Vulkan buffer or image created with it.
17970You must do it manually using an extension like VK_EXT_debug_utils, which is independent of this library.
17971
17972
17973\page virtual_allocator Virtual allocator
17974
17975As an extra feature, the core allocation algorithm of the library is exposed through a simple and convenient API of "virtual allocator".
17976It doesn't allocate any real GPU memory. It just keeps track of used and free regions of a "virtual block".
17977You can use it to allocate your own memory or other objects, even completely unrelated to Vulkan.
17978A common use case is sub-allocation of pieces of one large GPU buffer.
17979
17980\section virtual_allocator_creating_virtual_block Creating virtual block
17981
17982To use this functionality, there is no main "allocator" object.
17983You don't need to have #VmaAllocator object created.
17984All you need to do is to create a separate #VmaVirtualBlock object for each block of memory you want to be managed by the allocator:
17985
17986-# Fill in #VmaVirtualBlockCreateInfo structure.
17987-# Call vmaCreateVirtualBlock(). Get new #VmaVirtualBlock object.
17988
17989Example:
17990
17991\code
17992VmaVirtualBlockCreateInfo blockCreateInfo = {};
17993blockCreateInfo.size = 1048576; // 1 MB
17994
17995VmaVirtualBlock block;
17996VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
17997\endcode
17998
17999\section virtual_allocator_making_virtual_allocations Making virtual allocations
18000
18001#VmaVirtualBlock object contains internal data structure that keeps track of free and occupied regions
18002using the same code as the main Vulkan memory allocator.
18003Similarly to #VmaAllocation for standard GPU allocations, there is #VmaVirtualAllocation type
18004that represents an opaque handle to an allocation within the virtual block.
18005
18006In order to make such allocation:
18007
18008-# Fill in #VmaVirtualAllocationCreateInfo structure.
18009-# Call vmaVirtualAllocate(). Get new #VmaVirtualAllocation object that represents the allocation.
18010 You can also receive `VkDeviceSize offset` that was assigned to the allocation.
18011
18012Example:
18013
18014\code
18015VmaVirtualAllocationCreateInfo allocCreateInfo = {};
18016allocCreateInfo.size = 4096; // 4 KB
18017
18018VmaVirtualAllocation alloc;
18019VkDeviceSize offset;
18020res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
18021if(res == VK_SUCCESS)
18022{
18023 // Use the 4 KB of your memory starting at offset.
18024}
18025else
18026{
18027 // Allocation failed - no space for it could be found. Handle this error!
18028}
18029\endcode
18030
18031\section virtual_allocator_deallocation Deallocation
18032
18033When no longer needed, an allocation can be freed by calling vmaVirtualFree().
18034You can only pass to this function an allocation that was previously returned by vmaVirtualAllocate()
18035called for the same #VmaVirtualBlock.
18036
When the whole block is no longer needed, the block object can be released by calling vmaDestroyVirtualBlock().
All allocations must be freed before the block is destroyed, which is checked internally by an assert.
However, if you don't want to call vmaVirtualFree() for each allocation, you can use vmaClearVirtualBlock() to free them all at once -
a feature not available in the normal Vulkan memory allocator. Example:
18041
18042\code
18043vmaVirtualFree(block, alloc);
18044vmaDestroyVirtualBlock(block);
18045\endcode
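
Alternatively, using vmaClearVirtualBlock():

\code
vmaClearVirtualBlock(block); // Frees all allocations at once.
vmaDestroyVirtualBlock(block);
\endcode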
18046
18047\section virtual_allocator_allocation_parameters Allocation parameters
18048
18049You can attach a custom pointer to each allocation by using vmaSetVirtualAllocationUserData().
18050Its default value is null.
18051It can be used to store any data that needs to be associated with that allocation - e.g. an index, a handle, or a pointer to some
18052larger data structure containing more information. Example:
18053
18054\code
18055struct CustomAllocData
18056{
18057 std::string m_AllocName;
18058};
18059CustomAllocData* allocData = new CustomAllocData();
18060allocData->m_AllocName = "My allocation 1";
18061vmaSetVirtualAllocationUserData(block, alloc, allocData);
18062\endcode
18063
18064The pointer can later be fetched, along with allocation offset and size, by passing the allocation handle to function
18065vmaGetVirtualAllocationInfo() and inspecting returned structure #VmaVirtualAllocationInfo.
18066If you allocated a new object to be used as the custom pointer, don't forget to delete that object before freeing the allocation!
18067Example:
18068
18069\code
18070VmaVirtualAllocationInfo allocInfo;
18071vmaGetVirtualAllocationInfo(block, alloc, &allocInfo);
18072delete (CustomAllocData*)allocInfo.pUserData;
18073
18074vmaVirtualFree(block, alloc);
18075\endcode
18076
18077\section virtual_allocator_alignment_and_units Alignment and units
18078
18079It feels natural to express sizes and offsets in bytes.
If an offset of an allocation needs to be aligned to a multiple of some number (e.g. 4 bytes), you can fill optional member
18081VmaVirtualAllocationCreateInfo::alignment to request it. Example:
18082
18083\code
18084VmaVirtualAllocationCreateInfo allocCreateInfo = {};
18085allocCreateInfo.size = 4096; // 4 KB
allocCreateInfo.alignment = 4; // Returned offset must be a multiple of 4 B
18087
18088VmaVirtualAllocation alloc;
18089res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, nullptr);
18090\endcode
18091
18092Alignments of different allocations made from one block may vary.
However, if all alignments and sizes are always a multiple of some basic unit, e.g. 4 B or `sizeof(MyDataStruct)`,
you can express all sizes, alignments, and offsets in multiples of that unit instead of individual bytes.
It might be more convenient, but then you need to use this new unit consistently in all the places:
18096
18097- VmaVirtualBlockCreateInfo::size
18098- VmaVirtualAllocationCreateInfo::size and VmaVirtualAllocationCreateInfo::alignment
18099- Using offset returned by vmaVirtualAllocate() or in VmaVirtualAllocationInfo::offset
18100
18101\section virtual_allocator_statistics Statistics
18102
18103You can obtain statistics of a virtual block using vmaGetVirtualBlockStatistics()
18104(to get brief statistics that are fast to calculate)
18105or vmaCalculateVirtualBlockStatistics() (to get more detailed statistics, slower to calculate).
18106The functions fill structures #VmaStatistics, #VmaDetailedStatistics respectively - same as used by the normal Vulkan memory allocator.
18107Example:
18108
18109\code
18110VmaStatistics stats;
18111vmaGetVirtualBlockStatistics(block, &stats);
18112printf("My virtual block has %llu bytes used by %u virtual allocations\n",
18113 stats.allocationBytes, stats.allocationCount);
18114\endcode
18115
18116You can also request a full list of allocations and free regions as a string in JSON format by calling
18117vmaBuildVirtualBlockStatsString().
18118Returned string must be later freed using vmaFreeVirtualBlockStatsString().
18119The format of this string differs from the one returned by the main Vulkan allocator, but it is similar.
18120
18121\section virtual_allocator_additional_considerations Additional considerations
18122
18123The "virtual allocator" functionality is implemented on a level of individual memory blocks.
18124Keeping track of a whole collection of blocks, allocating new ones when out of free space,
18125deleting empty ones, and deciding which one to try first for a new allocation must be implemented by the user.
18126
18127Alternative allocation algorithms are supported, just like in custom pools of the real GPU memory.
18128See enum #VmaVirtualBlockCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT).
18129You can find their description in chapter \ref custom_memory_pools.
18130Allocation strategies are also supported.
18131See enum #VmaVirtualAllocationCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT).
18132
The following features are supported only by the allocator of the real GPU memory and not by virtual allocations:
18134buffer-image granularity, `VMA_DEBUG_MARGIN`, `VMA_MIN_ALIGNMENT`.
18135
18136
18137\page debugging_memory_usage Debugging incorrect memory usage
18138
18139If you suspect a bug with memory usage, like usage of uninitialized memory or
18140memory being overwritten out of bounds of an allocation,
18141you can use debug features of this library to verify this.
18142
18143\section debugging_memory_usage_initialization Memory initialization
18144
18145If you experience a bug with incorrect and nondeterministic data in your program and you suspect uninitialized memory to be used,
18146you can enable automatic memory initialization to verify this.
18147To do it, define macro `VMA_DEBUG_INITIALIZE_ALLOCATIONS` to 1.
18148
18149\code
18150#define VMA_DEBUG_INITIALIZE_ALLOCATIONS 1
18151#include "vk_mem_alloc.h"
18152\endcode
18153
18154It makes memory of new allocations initialized to bit pattern `0xDCDCDCDC`.
18155Before an allocation is destroyed, its memory is filled with bit pattern `0xEFEFEFEF`.
18156Memory is automatically mapped and unmapped if necessary.
18157
If you find these values while debugging your program, chances are good that you incorrectly
read Vulkan memory that is allocated but not initialized, or that is already freed, respectively.
18160
18161Memory initialization works only with memory types that are `HOST_VISIBLE` and with allocations that can be mapped.
18162It works also with dedicated allocations.
18163
18164\section debugging_memory_usage_margins Margins
18165
18166By default, allocations are laid out in memory blocks next to each other if possible
18167(considering required alignment, `bufferImageGranularity`, and `nonCoherentAtomSize`).
18168
18169![Allocations without margin](../gfx/Margins_1.png)
18170
18171Define macro `VMA_DEBUG_MARGIN` to some non-zero value (e.g. 16) to enforce specified
18172number of bytes as a margin after every allocation.
18173
18174\code
18175#define VMA_DEBUG_MARGIN 16
18176#include "vk_mem_alloc.h"
18177\endcode
18178
18179![Allocations with margin](../gfx/Margins_2.png)
18180
18181If your bug goes away after enabling margins, it means it may be caused by memory
18182being overwritten outside of allocation boundaries. It is not 100% certain though.
18183Change in application behavior may also be caused by different order and distribution
18184of allocations across memory blocks after margins are applied.
18185
18186Margins work with all types of memory.
18187
The margin is applied only to allocations made out of memory blocks and not to dedicated
allocations, which have their own memory block of a specific size.
It is thus not applied to allocations made using the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag
or to those automatically placed in dedicated allocations, e.g. due to their large size
or because the VK_KHR_dedicated_allocation extension recommends it.
18193
18194Margins appear in [JSON dump](@ref statistics_json_dump) as part of free space.
18195
18196Note that enabling margins increases memory usage and fragmentation.
18197
18198Margins do not apply to \ref virtual_allocator.
18199
18200\section debugging_memory_usage_corruption_detection Corruption detection
18201
18202You can additionally define macro `VMA_DEBUG_DETECT_CORRUPTION` to 1 to enable validation
18203of contents of the margins.
18204
18205\code
18206#define VMA_DEBUG_MARGIN 16
18207#define VMA_DEBUG_DETECT_CORRUPTION 1
18208#include "vk_mem_alloc.h"
18209\endcode
18210
When this feature is enabled, the number of bytes specified as `VMA_DEBUG_MARGIN`
(which must be a multiple of 4) after every allocation is filled with a magic number.
This idea is also known as a "canary".
18214Memory is automatically mapped and unmapped if necessary.
18215
18216This number is validated automatically when the allocation is destroyed.
18217If it is not equal to the expected value, `VMA_ASSERT()` is executed.
This clearly means that either the CPU or the GPU overwrote the memory outside the boundaries of the allocation,
which indicates a serious bug.
18220
18221You can also explicitly request checking margins of all allocations in all memory blocks
18222that belong to specified memory types by using function vmaCheckCorruption(),
18223or in memory blocks that belong to specified custom pool, by using function
18224vmaCheckPoolCorruption().
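
For example, an explicit check might look like this (assuming `pool` is an existing custom pool):

\code
// Check margins in all memory blocks of all memory types.
VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
// VK_SUCCESS - no corruption found,
// VK_ERROR_UNKNOWN - corruption detected,
// VK_ERROR_FEATURE_NOT_PRESENT - the feature is not enabled for any of the specified memory types.

// Or check only the memory blocks of one custom pool:
res = vmaCheckPoolCorruption(allocator, pool);
\endcode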
18225
18226Margin validation (corruption detection) works only for memory types that are
18227`HOST_VISIBLE` and `HOST_COHERENT`.
18228
18229
18230\section debugging_memory_usage_leak_detection Leak detection features
18231
18232At allocation and allocator destruction time VMA checks for unfreed and unmapped blocks using
18233`VMA_ASSERT_LEAK()`. This macro defaults to an assertion, triggering a typically fatal error in Debug
18234builds, and doing nothing in Release builds. You can provide your own definition of `VMA_ASSERT_LEAK()`
18235to change this behavior.
18236
18237At memory block destruction time VMA lists out all unfreed allocations using the `VMA_LEAK_LOG_FORMAT()`
18238macro, which defaults to `VMA_DEBUG_LOG_FORMAT`, which in turn defaults to a no-op.
If you're having trouble with leaks - for example, the aforementioned assertion triggers, but you don't
quite know \em why - overriding this macro to print out the leaking blocks, combined with assigning
individual names to allocations using vmaSetAllocationName(), can greatly aid in fixing them.
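
For example, a possible override, assuming a hypothetical engine-side logging function `MyEngineLogError()`:

\code
#define VMA_ASSERT_LEAK(expr) do { \
        if(!(expr)) MyEngineLogError("VMA leak check failed: " #expr); \
    } while(false)
#include "vk_mem_alloc.h"
\endcode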
18242
18243\page other_api_interop Interop with other graphics APIs
18244
18245VMA provides some features that help with interoperability with other graphics APIs, e.g. OpenGL.
18246
18247\section opengl_interop_exporting_memory Exporting memory
18248
18249If you want to attach `VkExportMemoryAllocateInfoKHR` or other structure to `pNext` chain of memory allocations made by the library:
18250
18251You can create \ref custom_memory_pools for such allocations.
18252Define and fill in your `VkExportMemoryAllocateInfoKHR` structure and attach it to VmaPoolCreateInfo::pMemoryAllocateNext
18253while creating the custom pool.
18254Please note that the structure must remain alive and unchanged for the whole lifetime of the #VmaPool,
18255not only while creating it, as no copy of the structure is made,
18256but its original pointer is used for each allocation instead.
18257
If you want to export all memory allocated by VMA from certain memory types,
including dedicated allocations and other allocations made from default pools,
18260an alternative solution is to fill in VmaAllocatorCreateInfo::pTypeExternalMemoryHandleTypes.
18261It should point to an array with `VkExternalMemoryHandleTypeFlagsKHR` to be automatically passed by the library
18262through `VkExportMemoryAllocateInfoKHR` on each allocation made from a specific memory type.
18263Please note that new versions of the library also support dedicated allocations created in custom pools.
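
A sketch of this alternative, assuming opaque Win32 handles and that the relevant external-memory
extensions are enabled on the device:

\code
// One element per memory type. VK_MAX_MEMORY_TYPES is a safe upper bound for the required
// VkPhysicalDeviceMemoryProperties::memoryTypeCount elements.
VkExternalMemoryHandleTypeFlagsKHR handleTypes[VK_MAX_MEMORY_TYPES] = {};
for(uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
    handleTypes[i] = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill instance, physicalDevice, device, and other members ...
allocatorCreateInfo.pTypeExternalMemoryHandleTypes = handleTypes;
\endcode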
18264
You should not mix these two methods in a way that would apply both to the same memory type.
18266Otherwise, `VkExportMemoryAllocateInfoKHR` structure would be attached twice to the `pNext` chain of `VkMemoryAllocateInfo`.
18267
18268
18269\section opengl_interop_custom_alignment Custom alignment
18270
18271Buffers or images exported to a different API like OpenGL may require a different alignment,
18272higher than the one used by the library automatically, queried from functions like `vkGetBufferMemoryRequirements`.
18273To impose such alignment:
18274
18275You can create \ref custom_memory_pools for such allocations.
18276Set VmaPoolCreateInfo::minAllocationAlignment member to the minimum alignment required for each allocation
18277to be made out of this pool.
18278The alignment actually used will be the maximum of this member and the alignment returned for the specific buffer or image
18279from a function like `vkGetBufferMemoryRequirements`, which is called by VMA automatically.
18280
18281If you want to create a buffer with a specific minimum alignment out of default pools,
18282use special function vmaCreateBufferWithAlignment(), which takes additional parameter `minAlignment`.
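
For example, a sketch requesting a 4 KB minimum alignment from default pools (the buffer parameters are arbitrary):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    4096, // minAlignment
    &buf, &alloc, nullptr);
\endcode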
18283
Note the problem of alignment affects only resources placed inside bigger `VkDeviceMemory` blocks and not dedicated
allocations, as these, by definition, are always bound at offset 0 of their own dedicated block, so any alignment requirement is trivially satisfied.
18286You can ensure that an allocation is created as dedicated by using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
18287Contrary to Direct3D 12, Vulkan doesn't have a concept of alignment of the entire memory block passed on its allocation.
18288
18289\section opengl_interop_extended_allocation_information Extended allocation information
18290
18291If you want to rely on VMA to allocate your buffers and images inside larger memory blocks,
18292but you need to know the size of the entire block and whether the allocation was made
18293with its own dedicated memory, use function vmaGetAllocationInfo2() to retrieve
18294extended allocation information in structure #VmaAllocationInfo2.
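
A brief sketch, assuming an existing allocation `alloc`:

\code
VmaAllocationInfo2 allocInfo2 = {};
vmaGetAllocationInfo2(allocator, alloc, &allocInfo2);
// allocInfo2.allocationInfo contains the same data that vmaGetAllocationInfo() returns.
VkDeviceSize wholeBlockSize = allocInfo2.blockSize;
VkBool32 isDedicated = allocInfo2.dedicatedMemory;
\endcode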
18295
18296
18297
18298\page usage_patterns Recommended usage patterns
18299
18300Vulkan gives great flexibility in memory allocation.
18301This chapter shows the most common patterns.
18302
18303See also slides from talk:
18304[Sawicki, Adam. Advanced Graphics Techniques Tutorial: Memory management in Vulkan and DX12. Game Developers Conference, 2018](https://www.gdcvault.com/play/1025458/Advanced-Graphics-Techniques-Tutorial-New)
18305
18306
18307\section usage_patterns_gpu_only GPU-only resource
18308
18309<b>When:</b>
18310Any resources that you frequently write and read on GPU,
18311e.g. images used as color attachments (aka "render targets"), depth-stencil attachments,
18312images/buffers used as storage image/buffer (aka "Unordered Access View (UAV)").
18313
18314<b>What to do:</b>
18315Let the library select the optimal memory type, which will likely have `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
18316
18317\code
18318VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
18319imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
18320imgCreateInfo.extent.width = 3840;
18321imgCreateInfo.extent.height = 2160;
18322imgCreateInfo.extent.depth = 1;
18323imgCreateInfo.mipLevels = 1;
18324imgCreateInfo.arrayLayers = 1;
18325imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
18326imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
18327imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
18328imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
18329imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
18330
18331VmaAllocationCreateInfo allocCreateInfo = {};
18332allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18333allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
18334allocCreateInfo.priority = 1.0f;
18335
18336VkImage img;
18337VmaAllocation alloc;
18338vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
18339\endcode
18340
18341<b>Also consider:</b>
18342Consider creating them as dedicated allocations using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
18343especially if they are large or if you plan to destroy and recreate them with different sizes
18344e.g. when display resolution changes.
18345Prefer to create such resources first and all other GPU resources (like textures and vertex buffers) later.
When the VK_EXT_memory_priority extension is enabled, it is also worth setting high priority on such allocations
to decrease their chances of being evicted to system memory by the operating system.
18348
18349\section usage_patterns_staging_copy_upload Staging copy for upload
18350
18351<b>When:</b>
A "staging" buffer that you want to map and fill from CPU code, then use as a source of transfer
18353to some GPU resource.
18354
18355<b>What to do:</b>
18356Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT.
18357Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`.
18358
18359\code
18360VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
18361bufCreateInfo.size = 65536;
18362bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
18363
18364VmaAllocationCreateInfo allocCreateInfo = {};
18365allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18366allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
18367 VMA_ALLOCATION_CREATE_MAPPED_BIT;
18368
18369VkBuffer buf;
18370VmaAllocation alloc;
18371VmaAllocationInfo allocInfo;
18372vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
18373
18374...
18375
18376memcpy(allocInfo.pMappedData, myData, myDataSize);
18377\endcode
18378
18379<b>Also consider:</b>
You can map the allocation using vmaMapMemory() or you can create it as persistently mapped
using #VMA_ALLOCATION_CREATE_MAPPED_BIT, as in the example above.
18382
18383
18384\section usage_patterns_readback Readback
18385
18386<b>When:</b>
18387Buffers for data written by or transferred from the GPU that you want to read back on the CPU,
18388e.g. results of some computations.
18389
18390<b>What to do:</b>
18391Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
18392Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
18393and `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
18394
18395\code
18396VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
18397bufCreateInfo.size = 65536;
18398bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;
18399
18400VmaAllocationCreateInfo allocCreateInfo = {};
18401allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18402allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT |
18403 VMA_ALLOCATION_CREATE_MAPPED_BIT;
18404
18405VkBuffer buf;
18406VmaAllocation alloc;
18407VmaAllocationInfo allocInfo;
18408vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
18409
18410...
18411
18412const float* downloadedData = (const float*)allocInfo.pMappedData;
18413\endcode
18414
18415
18416\section usage_patterns_advanced_data_uploading Advanced data uploading
18417
18418For resources that you frequently write on CPU via mapped pointer and
18419frequently read on GPU e.g. as a uniform buffer (also called "dynamic"), multiple options are possible:
18420
18421-# Easiest solution is to have one copy of the resource in `HOST_VISIBLE` memory,
18422 even if it means system RAM (not `DEVICE_LOCAL`) on systems with a discrete graphics card,
18423 and make the device reach out to that resource directly.
18424 - Reads performed by the device will then go through PCI Express bus.
18425 The performance of this access may be limited, but it may be fine depending on the size
18426 of this resource (whether it is small enough to quickly end up in GPU cache) and the sparsity
18427 of access.
18428-# On systems with unified memory (e.g. AMD APU or Intel integrated graphics, mobile chips),
18429 a memory type may be available that is both `HOST_VISIBLE` (available for mapping) and `DEVICE_LOCAL`
18430 (fast to access from the GPU). Then, it is likely the best choice for such type of resource.
18431-# Systems with a discrete graphics card and separate video memory may or may not expose
18432 a memory type that is both `HOST_VISIBLE` and `DEVICE_LOCAL`, also known as Base Address Register (BAR).
18433 If they do, it represents a piece of VRAM (or entire VRAM, if ReBAR is enabled in the motherboard BIOS)
18434 that is available to CPU for mapping.
18435 - Writes performed by the host to that memory go through PCI Express bus.
18436 The performance of these writes may be limited, but it may be fine, especially on PCIe 4.0,
18437 as long as rules of using uncached and write-combined memory are followed - only sequential writes and no reads.
18438-# Finally, you may need or prefer to create a separate copy of the resource in `DEVICE_LOCAL` memory,
18439 a separate "staging" copy in `HOST_VISIBLE` memory and perform an explicit transfer command between them.
18440
Thankfully, VMA offers an aid to create and use such resources in the way optimal
for the current Vulkan device. To help the library make the best choice,
18443use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT together with
18444#VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT.
18445It will then prefer a memory type that is both `DEVICE_LOCAL` and `HOST_VISIBLE` (integrated memory or BAR),
18446but if no such memory type is available or allocation from it fails
18447(PC graphics cards have only 256 MB of BAR by default, unless ReBAR is supported and enabled in BIOS),
18448it will fall back to `DEVICE_LOCAL` memory for fast GPU access.
It is then up to you to detect that the allocation ended up in a memory type that is not `HOST_VISIBLE`,
in which case you need to create another "staging" allocation and perform explicit transfers.
18451
18452\code
18453VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
18454bufCreateInfo.size = 65536;
18455bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
18456
18457VmaAllocationCreateInfo allocCreateInfo = {};
18458allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18459allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
18460 VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT |
18461 VMA_ALLOCATION_CREATE_MAPPED_BIT;
18462
18463VkBuffer buf;
18464VmaAllocation alloc;
18465VmaAllocationInfo allocInfo;
18466VkResult result = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
18467// Check result...
18468
18469VkMemoryPropertyFlags memPropFlags;
18470vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);
18471
18472if(memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
18473{
18474 // Allocation ended up in a mappable memory and is already mapped - write to it directly.
18475
18476 // [Executed in runtime]:
18477 memcpy(allocInfo.pMappedData, myData, myDataSize);
18478 result = vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);
18479 // Check result...
18480
18481 VkBufferMemoryBarrier bufMemBarrier = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER };
18482 bufMemBarrier.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT;
18483 bufMemBarrier.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT;
18484 bufMemBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
18485 bufMemBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
18486 bufMemBarrier.buffer = buf;
18487 bufMemBarrier.offset = 0;
18488 bufMemBarrier.size = VK_WHOLE_SIZE;
18489
18490 vkCmdPipelineBarrier(cmdBuf, VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
18491 0, 0, nullptr, 1, &bufMemBarrier, 0, nullptr);
18492}
18493else
18494{
18495 // Allocation ended up in a non-mappable memory - a transfer using a staging buffer is required.
18496 VkBufferCreateInfo stagingBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
18497 stagingBufCreateInfo.size = 65536;
18498 stagingBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
18499
18500 VmaAllocationCreateInfo stagingAllocCreateInfo = {};
18501 stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18502 stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
18503 VMA_ALLOCATION_CREATE_MAPPED_BIT;
18504
18505 VkBuffer stagingBuf;
18506 VmaAllocation stagingAlloc;
18507 VmaAllocationInfo stagingAllocInfo;
18508 result = vmaCreateBuffer(allocator, &stagingBufCreateInfo, &stagingAllocCreateInfo,
18509 &stagingBuf, &stagingAlloc, &stagingAllocInfo);
18510 // Check result...
18511
18512 // [Executed in runtime]:
18513 memcpy(stagingAllocInfo.pMappedData, myData, myDataSize);
18514 result = vmaFlushAllocation(allocator, stagingAlloc, 0, VK_WHOLE_SIZE);
18515 // Check result...
18516
18517 VkBufferMemoryBarrier bufMemBarrier = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER };
18518 bufMemBarrier.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT;
18519 bufMemBarrier.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT;
18520 bufMemBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
18521 bufMemBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
18522 bufMemBarrier.buffer = stagingBuf;
18523 bufMemBarrier.offset = 0;
18524 bufMemBarrier.size = VK_WHOLE_SIZE;
18525
18526 vkCmdPipelineBarrier(cmdBuf, VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
18527 0, 0, nullptr, 1, &bufMemBarrier, 0, nullptr);
18528
18529 VkBufferCopy bufCopy = {
18530 0, // srcOffset
18531 0, // dstOffset,
18532 myDataSize, // size
18533 };
18534
18535 vkCmdCopyBuffer(cmdBuf, stagingBuf, buf, 1, &bufCopy);
18536
18537 VkBufferMemoryBarrier bufMemBarrier2 = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER };
18538 bufMemBarrier2.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
18539 bufMemBarrier2.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT; // We created a uniform buffer
18540 bufMemBarrier2.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
18541 bufMemBarrier2.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
18542 bufMemBarrier2.buffer = buf;
18543 bufMemBarrier2.offset = 0;
18544 bufMemBarrier2.size = VK_WHOLE_SIZE;
18545
18546 vkCmdPipelineBarrier(cmdBuf, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
18547 0, 0, nullptr, 1, &bufMemBarrier2, 0, nullptr);
18548}
18549\endcode
18550
18551\section usage_patterns_other_use_cases Other use cases
18552
18553Here are some other, less obvious use cases and their recommended settings:
18554
18555- An image that is used only as transfer source and destination, but it should stay on the device,
18556 as it is used to temporarily store a copy of some texture, e.g. from the current to the next frame,
18557 for temporal antialiasing or other temporal effects.
18558 - Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
18559 - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO
- An image that is used only as transfer source and destination, but should be placed
  in system RAM even though it doesn't need to be mapped, because it serves as a "swap" copy used to evict
  least recently used textures from VRAM.
18563 - Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
18564 - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_HOST,
18565 as VMA needs a hint here to differentiate from the previous case.
18566- A buffer that you want to map and write from the CPU, directly read from the GPU
18567 (e.g. as a uniform or vertex buffer), but you have a clear preference to place it in device or
18568 host memory due to its large size.
18569 - Use `VkBufferCreateInfo::usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT`
18570 - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST
18571 - Use VmaAllocationCreateInfo::flags = #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT
18572
18573
18574\page configuration Configuration
18575
18576Please check "CONFIGURATION SECTION" in the code to find macros that you can define
18577before each include of this file or change directly in this file to provide
18578your own implementation of basic facilities like assert, `min()` and `max()` functions,
18579mutex, atomic etc.
18580
18581For example, define `VMA_ASSERT(expr)` before including the library to provide
18582custom implementation of the assertion, compatible with your project.
18583By default it is defined to standard C `assert(expr)` in `_DEBUG` configuration
18584and empty otherwise.
18585
18586Similarly, you can define `VMA_LEAK_LOG_FORMAT` macro to enable printing of leaked (unfreed) allocations,
18587including their names and other parameters. Example:
18588
18589\code
18590#define VMA_LEAK_LOG_FORMAT(format, ...) do { \
18591 printf((format), __VA_ARGS__); \
18592 printf("\n"); \
18593 } while(false)
18594\endcode
18595
18596\section config_Vulkan_functions Pointers to Vulkan functions
18597
18598There are multiple ways to import pointers to Vulkan functions in the library.
18599In the simplest case you don't need to do anything.
18600If the compilation or linking of your program or the initialization of the #VmaAllocator
18601doesn't work for you, you can try to reconfigure it.
18602
18603First, the allocator tries to fetch pointers to Vulkan functions linked statically,
18604like this:
18605
18606\code
18607m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
18608\endcode
18609
18610If you want to disable this feature, set configuration macro: `#define VMA_STATIC_VULKAN_FUNCTIONS 0`.
18611
18612Second, you can provide the pointers yourself by setting member VmaAllocatorCreateInfo::pVulkanFunctions.
18613You can fetch them e.g. using functions `vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` or
18614by using a helper library like [volk](https://github.com/zeux/volk).
18615
18616Third, VMA tries to fetch remaining pointers that are still null by calling
18617`vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` on its own.
18618You need to only fill in VmaVulkanFunctions::vkGetInstanceProcAddr and VmaVulkanFunctions::vkGetDeviceProcAddr.
18619Other pointers will be fetched automatically.
18620If you want to disable this feature, set configuration macro: `#define VMA_DYNAMIC_VULKAN_FUNCTIONS 0`.
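
For example, a sketch of this dynamic approach:

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill instance, physicalDevice, device, and other members ...
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
\endcode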
18621
18622Finally, all the function pointers required by the library (considering selected
18623Vulkan version and enabled extensions) are checked with `VMA_ASSERT` if they are not null.
18624
18625
18626\section custom_memory_allocator Custom host memory allocator
18627
If you use a custom allocator for CPU memory rather than the default C++ operator `new`
and `delete`, you can make this library use your allocator as well
by filling the optional member VmaAllocatorCreateInfo::pAllocationCallbacks. These
functions will be passed to Vulkan, as well as used by the library itself to
make any CPU-side allocations.
18633
18634\section allocation_callbacks Device memory allocation callbacks
18635
18636The library makes calls to `vkAllocateMemory()` and `vkFreeMemory()` internally.
You can set up callbacks to be informed about these calls, e.g. for the purpose
18638of gathering some statistics. To do it, fill optional member
18639VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
18640
18641\section heap_memory_limit Device heap memory limit
18642
When device memory of a certain heap runs out of free space, new allocations may
fail (returning an error code) or they may succeed, silently pushing some existing
memory blocks from GPU VRAM to system RAM (which degrades performance). This
behavior is implementation-dependent - it depends on the GPU vendor and graphics
driver.
18648
18649On AMD cards it can be controlled while creating Vulkan device object by using
18650VK_AMD_memory_overallocation_behavior extension, if available.
18651
18652Alternatively, if you want to test how your program behaves with limited amount of Vulkan device
18653memory available without switching your graphics card to one that really has
18654smaller VRAM, you can use a feature of this library intended for this purpose.
18655To do it, fill optional member VmaAllocatorCreateInfo::pHeapSizeLimit.
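
For example, a sketch that simulates a 256 MB limit on heap 0 while leaving other heaps unrestricted:

\code
VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    heapSizeLimit[i] = VK_WHOLE_SIZE; // VK_WHOLE_SIZE means no limit on that heap.
heapSizeLimit[0] = 256ull * 1024 * 1024;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill other members ...
allocatorCreateInfo.pHeapSizeLimit = heapSizeLimit;
\endcode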
18656
18657
18658
18659\page vk_khr_dedicated_allocation VK_KHR_dedicated_allocation
18660
18661VK_KHR_dedicated_allocation is a Vulkan extension which can be used to improve
performance on some GPUs. It augments the Vulkan API with the possibility to query
the driver whether it prefers a particular buffer or image to have its own, dedicated
allocation (a separate `VkDeviceMemory` block) for better efficiency - to be able
to do some internal optimizations. The extension is supported by this library.
18666It will be used automatically when enabled.
18667
18668It has been promoted to core Vulkan 1.1, so if you use eligible Vulkan version
18669and inform VMA about it by setting VmaAllocatorCreateInfo::vulkanApiVersion,
18670you are all set.
18671
18672Otherwise, if you want to use it as an extension:
18673
1. When creating a Vulkan device, check if the following 2 device extensions are
supported (call `vkEnumerateDeviceExtensionProperties()`).
If yes, enable them (fill `VkDeviceCreateInfo::ppEnabledExtensionNames`).
18677
18678- VK_KHR_get_memory_requirements2
18679- VK_KHR_dedicated_allocation
18680
18681If you enabled these extensions:
18682
2. Use the #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag when creating
your #VmaAllocator to inform the library that you enabled the required extensions
and you want the library to use them.
18686
18687\code
18688allocatorInfo.flags |= VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT;
18689
18690vmaCreateAllocator(&allocatorInfo, &allocator);
18691\endcode
18692
18693That is all. The extension will be automatically used whenever you create a
18694buffer using vmaCreateBuffer() or image using vmaCreateImage().
18695
18696When using the extension together with Vulkan Validation Layer, you will receive
18697warnings like this:
18698
18699_vkBindBufferMemory(): Binding memory to buffer 0x33 but vkGetBufferMemoryRequirements() has not been called on that buffer._
18700
18701It is OK, you should just ignore it. It happens because you use function
18702`vkGetBufferMemoryRequirements2KHR()` instead of standard
18703`vkGetBufferMemoryRequirements()`, while the validation layer seems to be
18704unaware of it.
18705
18706To learn more about this extension, see:
18707
18708- [VK_KHR_dedicated_allocation in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap50.html#VK_KHR_dedicated_allocation)
18709- [VK_KHR_dedicated_allocation unofficial manual](http://asawicki.info/articles/VK_KHR_dedicated_allocation.php5)
18710
18711
18712
18713\page vk_ext_memory_priority VK_EXT_memory_priority
18714
VK_EXT_memory_priority is a device extension that allows passing an additional "priority"
value with Vulkan memory allocations. The implementation may use it to prefer that certain
buffers and images critical for performance stay in device-local memory
in cases when the memory is over-subscribed, while some others may be moved to system memory.
18719
18720VMA offers convenient usage of this extension.
18721If you enable it, you can pass "priority" parameter when creating allocations or custom pools
18722and the library automatically passes the value to Vulkan using this extension.
18723
18724If you want to use this extension in connection with VMA, follow these steps:
18725
18726\section vk_ext_memory_priority_initialization Initialization
18727
187281) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
18729Check if the extension is supported - if returned array of `VkExtensionProperties` contains "VK_EXT_memory_priority".
18730
187312) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of old `vkGetPhysicalDeviceFeatures`.
18732Attach additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
18733Check if the device feature is really supported - check if `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority` is true.
18734
187353) While creating device with `vkCreateDevice`, enable this extension - add "VK_EXT_memory_priority"
18736to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.
18737
187384) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
18739Fill in `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
18740Enable this device feature - attach additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to
18741`VkPhysicalDeviceFeatures2::pNext` chain and set its member `memoryPriority` to `VK_TRUE`.
18742
187435) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
18744have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
18745to VmaAllocatorCreateInfo::flags.
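
A condensed sketch of steps 2)-5), assuming `physicalDevice` exists and `allocatorCreateInfo`
is being prepared (the extension check from step 1 and error handling are omitted):

\code
VkPhysicalDeviceMemoryPriorityFeaturesEXT priorityFeatures = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PRIORITY_FEATURES_EXT };
VkPhysicalDeviceFeatures2 features2 = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
features2.pNext = &priorityFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);

if(priorityFeatures.memoryPriority == VK_TRUE)
{
    const char* enabledExtensionNames[] = { VK_EXT_MEMORY_PRIORITY_EXTENSION_NAME };
    VkDeviceCreateInfo deviceCreateInfo = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    deviceCreateInfo.pNext = &features2; // pEnabledFeatures stays null; memoryPriority is already VK_TRUE.
    deviceCreateInfo.enabledExtensionCount = 1;
    deviceCreateInfo.ppEnabledExtensionNames = enabledExtensionNames;
    // ... fill queue create infos, then call vkCreateDevice() ...

    allocatorCreateInfo.flags |= VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT;
}
\endcode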
18746
18747\section vk_ext_memory_priority_usage Usage
18748
When using this extension, you should initialize the following members:
18750
18751- VmaAllocationCreateInfo::priority when creating a dedicated allocation with #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
18752- VmaPoolCreateInfo::priority when creating a custom pool.
18753
It should be a floating-point value between `0.0f` and `1.0f`, where the recommended default is `0.5f`.
Memory allocated with a higher value can be treated by the Vulkan implementation as higher priority,
so it has lower chances of being pushed out to system memory and experiencing degraded performance.
18757
18758It might be a good idea to create performance-critical resources like color-attachment or depth-stencil images
18759as dedicated and set high priority to them. For example:
18760
18761\code
18762VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
18763imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
18764imgCreateInfo.extent.width = 3840;
18765imgCreateInfo.extent.height = 2160;
18766imgCreateInfo.extent.depth = 1;
18767imgCreateInfo.mipLevels = 1;
18768imgCreateInfo.arrayLayers = 1;
18769imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
18770imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
18771imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
18772imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
18773imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
18774
18775VmaAllocationCreateInfo allocCreateInfo = {};
18776allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18777allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
18778allocCreateInfo.priority = 1.0f;
18779
18780VkImage img;
18781VmaAllocation alloc;
18782vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
18783\endcode
18784
18785`priority` member is ignored in the following situations:
18786
18787- Allocations created in custom pools: They inherit the priority, along with all other allocation parameters
18788 from the parameters passed in #VmaPoolCreateInfo when the pool was created.
18789- Allocations created in default pools: They inherit the priority from the parameters
18790 VMA used when creating default pools, which means `priority == 0.5f`.
18791
18792
18793\page vk_amd_device_coherent_memory VK_AMD_device_coherent_memory
18794
18795VK_AMD_device_coherent_memory is a device extension that enables access to
18796additional memory types with `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and
18797`VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flag. It is useful mostly for
18798allocation of buffers intended for writing "breadcrumb markers" in between passes
18799or draw calls, which in turn are useful for debugging GPU crash/hang/TDR cases.
18800
18801When the extension is available but has not been enabled, Vulkan physical device
18802still exposes those memory types, but their usage is forbidden. VMA automatically
18803takes care of that - it returns `VK_ERROR_FEATURE_NOT_PRESENT` when an attempt
18804to allocate memory of such type is made.
18805
18806If you want to use this extension in connection with VMA, follow these steps:
18807
18808\section vk_amd_device_coherent_memory_initialization Initialization
18809
188101) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
18811Check if the extension is supported - if returned array of `VkExtensionProperties` contains "VK_AMD_device_coherent_memory".
18812
188132) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of old `vkGetPhysicalDeviceFeatures`.
18814Attach additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
18815Check if the device feature is really supported - check if `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true.
18816
188173) While creating device with `vkCreateDevice`, enable this extension - add "VK_AMD_device_coherent_memory"
18818to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.
18819
188204) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
18821Fill in `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
18822Enable this device feature - attach additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to
18823`VkPhysicalDeviceFeatures2::pNext` and set its member `deviceCoherentMemory` to `VK_TRUE`.
18824
188255) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
18826have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
18827to VmaAllocatorCreateInfo::flags.
18828
18829\section vk_amd_device_coherent_memory_usage Usage
18830
After following the steps described above, you can create VMA allocations and custom pools
18832out of the special `DEVICE_COHERENT` and `DEVICE_UNCACHED` memory types on eligible
18833devices. There are multiple ways to do it, for example:
18834
- You can request or prefer to allocate out of such memory types by adding
  `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` to VmaAllocationCreateInfo::requiredFlags
  or VmaAllocationCreateInfo::preferredFlags, as shown in the sketch below. Those flags can be freely mixed with
  other ways of \ref choosing_memory_type, like setting VmaAllocationCreateInfo::usage.
18839- If you manually found memory type index to use for this purpose, force allocation
18840 from this specific index by setting VmaAllocationCreateInfo::memoryTypeBits `= 1u << index`.
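
For example, the first option might look like this (the buffer parameters are arbitrary):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 4096; // E.g. a small buffer for breadcrumb markers.
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
\endcode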
18841
18842\section vk_amd_device_coherent_memory_more_information More information
18843
18844To learn more about this extension, see [VK_AMD_device_coherent_memory in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VK_AMD_device_coherent_memory.html)
18845
18846Example use of this extension can be found in the code of the sample and test suite
18847accompanying this library.
18848
18849
18850\page vk_khr_external_memory_win32 VK_KHR_external_memory_win32
18851
18852On Windows, the VK_KHR_external_memory_win32 device extension allows exporting a Win32 `HANDLE`
18853of a `VkDeviceMemory` block, to be able to reference the memory on other Vulkan logical devices or instances,
18854in multiple processes, and/or in multiple APIs.
18855VMA offers support for it.
18856
18857\section vk_khr_external_memory_win32_initialization Initialization
18858
1) Make sure the extension is defined in the code by including the following header before including VMA:
18860
18861\code
18862#include <vulkan/vulkan_win32.h>
18863\endcode
18864
188652) Check if "VK_KHR_external_memory_win32" is available among device extensions.
18866Enable it when creating the `VkDevice` object.
18867
188683) Enable the usage of this extension in VMA by setting flag #VMA_ALLOCATOR_CREATE_KHR_EXTERNAL_MEMORY_WIN32_BIT
18869when calling vmaCreateAllocator().
18870
188714) Make sure that VMA has access to the `vkGetMemoryWin32HandleKHR` function by either enabling `VMA_DYNAMIC_VULKAN_FUNCTIONS` macro
18872or setting VmaVulkanFunctions::vkGetMemoryWin32HandleKHR explicitly.
18873For more information, see \ref quick_start_initialization_importing_vulkan_functions.
18874
18875\section vk_khr_external_memory_win32_preparations Preparations
18876
18877You can find example usage among tests, in file "Tests.cpp", function `TestWin32Handles()`.
18878
To use the extension, buffers need to be created with `VkExternalMemoryBufferCreateInfoKHR` attached to their `pNext` chain,
18880and memory allocations need to be made with `VkExportMemoryAllocateInfoKHR` attached to their `pNext` chain.
18881To make use of them, you need to use \ref custom_memory_pools. Example:
18882
18883\code
18884// Define an example buffer and allocation parameters.
18885VkExternalMemoryBufferCreateInfoKHR externalMemBufCreateInfo = {
18886 VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_BUFFER_CREATE_INFO_KHR,
18887 nullptr,
18888 VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT
18889};
18890VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
18891exampleBufCreateInfo.size = 0x10000; // Doesn't matter here.
18892exampleBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
18893exampleBufCreateInfo.pNext = &externalMemBufCreateInfo;
18894
18895VmaAllocationCreateInfo exampleAllocCreateInfo = {};
18896exampleAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
18897
18898// Find memory type index to use for the custom pool.
18899uint32_t memTypeIndex;
18900VkResult res = vmaFindMemoryTypeIndexForBufferInfo(g_Allocator,
18901 &exampleBufCreateInfo, &exampleAllocCreateInfo, &memTypeIndex);
18902// Check res...
18903
18904// Create a custom pool.
18905constexpr static VkExportMemoryAllocateInfoKHR exportMemAllocInfo = {
18906 VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR,
18907 nullptr,
18908 VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT
18909};
18910VmaPoolCreateInfo poolCreateInfo = {};
18911poolCreateInfo.memoryTypeIndex = memTypeIndex;
18912poolCreateInfo.pMemoryAllocateNext = (void*)&exportMemAllocInfo;
18913
18914VmaPool pool;
18915res = vmaCreatePool(g_Allocator, &poolCreateInfo, &pool);
18916// Check res...
18917
18918// YOUR OTHER CODE COMES HERE....
18919
18920// At the end, don't forget to destroy it!
18921vmaDestroyPool(g_Allocator, pool);
18922\endcode
18923
18924Note that the structure passed as VmaPoolCreateInfo::pMemoryAllocateNext must remain alive and unchanged
18925for the whole lifetime of the custom pool, because it will be used when the pool allocates a new device memory block.
18926No copy is made internally. This is why variable `exportMemAllocInfo` is defined as `static`.

\section vk_khr_external_memory_win32_memory_allocation Memory allocation

Finally, you can create a buffer with an allocation out of the custom pool.
The buffer should use the same flags as the sample buffer used to find the memory type.
It should also specify `VkExternalMemoryBufferCreateInfoKHR` in its `pNext` chain.

\code
VkExternalMemoryBufferCreateInfoKHR externalMemBufCreateInfo = {
    VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_BUFFER_CREATE_INFO_KHR,
    nullptr,
    VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT
};
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = // Your desired buffer size.
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
bufCreateInfo.pNext = &externalMemBufCreateInfo;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool; // It is enough to set this one member.

VkBuffer buf;
VmaAllocation alloc;
res = vmaCreateBuffer(g_Allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// Check res...

// YOUR OTHER CODE COMES HERE....

// At the end, don't forget to destroy it!
vmaDestroyBuffer(g_Allocator, buf, alloc);
\endcode

If you need each allocation to have its own device memory block and start at offset 0, you can still achieve that
by using the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag, which also works with custom pools, as shown in the sketch below.
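
For example, a minimal sketch of requesting a dedicated block, reusing the `pool` object created in the previous examples:

\code
VmaAllocationCreateInfo dedicatedAllocCreateInfo = {};
dedicatedAllocCreateInfo.pool = pool; // The custom pool created earlier.
// Request a separate VkDeviceMemory block, so the allocation starts at offset 0.
dedicatedAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
\endcode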

\section vk_khr_external_memory_win32_exporting_win32_handle Exporting Win32 handle

After the allocation is created, you can acquire a Win32 `HANDLE` to the `VkDeviceMemory` block it belongs to.
The VMA function vmaGetMemoryWin32Handle() is a replacement for the Vulkan function `vkGetMemoryWin32HandleKHR`.

\code
HANDLE handle;
res = vmaGetMemoryWin32Handle(g_Allocator, alloc, nullptr, &handle);
// Check res...

// YOUR OTHER CODE COMES HERE....

// At the end, you must close the handle.
CloseHandle(handle);
\endcode

The documentation of the VK_KHR_external_memory_win32 extension states that:

> If handleType is defined as an NT handle, vkGetMemoryWin32HandleKHR must be called no more than once for each valid unique combination of memory and handleType.

This is ensured automatically inside VMA.
The library fetches the handle on first use, remembers it internally, and closes it when the memory block or dedicated allocation is destroyed.
Every time you call vmaGetMemoryWin32Handle(), VMA calls `DuplicateHandle` and returns a new handle that you need to close.
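
For instance, based on the behavior described above, two consecutive calls yield two separately duplicated handles, each of which must be closed on its own:

\code
HANDLE handle1, handle2;
res = vmaGetMemoryWin32Handle(g_Allocator, alloc, nullptr, &handle1);
// Check res...
res = vmaGetMemoryWin32Handle(g_Allocator, alloc, nullptr, &handle2);
// Check res...

// Both handles refer to the same VkDeviceMemory, but each is a separate
// duplicate and must be closed independently.
CloseHandle(handle2);
CloseHandle(handle1);
\endcode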

For further information, please check the documentation of the vmaGetMemoryWin32Handle() function.


\page enabling_buffer_device_address Enabling buffer device address

The device extension VK_KHR_buffer_device_address allows fetching a raw GPU pointer to a buffer
and passing it for use in shader code. It has been promoted to core Vulkan 1.2.

If you want to use this feature in connection with VMA, follow these steps:

\section enabling_buffer_device_address_initialization Initialization

1) (For Vulkan version < 1.2) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - whether the returned array of `VkExtensionProperties` contains
"VK_KHR_buffer_device_address".

2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of the old `vkGetPhysicalDeviceFeatures`.
Attach an additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - whether `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress` is true.

3) (For Vulkan version < 1.2) While creating the device with `vkCreateDevice`, enable this extension - add
"VK_KHR_buffer_device_address" to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.

4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in the `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach the additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to
`VkPhysicalDeviceFeatures2::pNext` and set its member `bufferDeviceAddress` to `VK_TRUE`.

5) While creating the #VmaAllocator with vmaCreateAllocator(), inform VMA that you
have enabled this feature - add #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
to VmaAllocatorCreateInfo::flags.
A combined sketch of steps 2, 4, and 5 is shown below.
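
The following minimal sketch combines steps 2, 4, and 5, targeting Vulkan >= 1.2; `g_PhysicalDevice` is a placeholder, and queue setup plus the actual `vkCreateDevice` and vmaCreateAllocator() calls are assumed to happen elsewhere.

\code
VkPhysicalDeviceBufferDeviceAddressFeatures bdaFeatures = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES };
VkPhysicalDeviceFeatures2 features2 = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
features2.pNext = &bdaFeatures;

// Step 2: query the feature.
vkGetPhysicalDeviceFeatures2(g_PhysicalDevice, &features2);

if(bdaFeatures.bufferDeviceAddress == VK_TRUE)
{
    // Step 4: pass the same chain at device creation instead of pEnabledFeatures.
    VkDeviceCreateInfo deviceCreateInfo = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    deviceCreateInfo.pNext = &features2;
    // Fill queue create infos etc. and call vkCreateDevice...

    // Step 5: inform VMA when creating the allocator.
    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.flags = VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT;
    // Fill other members and call vmaCreateAllocator...
}
\endcode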

\section enabling_buffer_device_address_usage Usage

After following the steps described above, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*` using VMA.
The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT*` to
allocated memory blocks wherever it might be needed.

Please note that the library supports only `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*`.
The second part of this functionality, related to "capture and replay", is not supported,
as it is intended for use in debugging tools like RenderDoc, not in everyday Vulkan usage.
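
Below is a minimal sketch of such usage, assuming Vulkan >= 1.2 core entry points and a `g_Device` placeholder for the device created earlier:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT |
    VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(g_Allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// Check res...

// Fetch the raw GPU address to pass to a shader.
VkBufferDeviceAddressInfo addrInfo = { VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO };
addrInfo.buffer = buf;
VkDeviceAddress addr = vkGetBufferDeviceAddress(g_Device, &addrInfo);
\endcode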

\section enabling_buffer_device_address_more_information More information

To learn more about this extension, see [VK_KHR_buffer_device_address in the Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap46.html#VK_KHR_buffer_device_address).

Example use of this extension can be found in the code of the sample and test suite
accompanying this library.

\page general_considerations General considerations

\section general_considerations_thread_safety Thread safety

- The library has no global state, so separate #VmaAllocator objects can be used
  independently.
  There should be no need to create multiple such objects though - one per `VkDevice` is enough.
- By default, all calls to functions that take #VmaAllocator as the first parameter
  are safe to call from multiple threads simultaneously because they are
  synchronized internally when needed.
  This includes allocation and deallocation from the default memory pool, as well as custom #VmaPool.
- When the allocator is created with the #VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT
  flag, calls to functions that take such a #VmaAllocator object must be
  synchronized externally.
- Access to a #VmaAllocation object must be externally synchronized. For example,
  you must not call vmaGetAllocationInfo() and vmaMapMemory() from different
  threads at the same time if you pass the same #VmaAllocation object to these
  functions. See the sketch after this list.
- #VmaVirtualBlock is not safe to be used from multiple threads simultaneously.
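
A minimal sketch of such external synchronization, using a hypothetical mutex owned by the application (the mutex and function names are illustrative, not part of the library):

\code
#include <mutex>

std::mutex g_AllocMutex; // Guards access to one shared VmaAllocation.

void QueryAllocationInfoThreadSafe(VmaAllocator allocator, VmaAllocation alloc)
{
    std::lock_guard<std::mutex> lock(g_AllocMutex);
    VmaAllocationInfo allocInfo;
    vmaGetAllocationInfo(allocator, alloc, &allocInfo);
    // Use allocInfo...
}
\endcode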

\section general_considerations_versioning_and_compatibility Versioning and compatibility

The library uses [**Semantic Versioning**](https://semver.org/),
which means version numbers follow the convention Major.Minor.Patch (e.g. 2.3.0), where:

- An incremented Patch version means a release is backward- and forward-compatible,
  introducing only some internal improvements, bug fixes, optimizations etc.
  or changes that are out of scope of the official API described in this documentation.
- An incremented Minor version means a release is backward-compatible,
  so existing code that uses the library should continue to work, while some new
  symbols could have been added: new structures, functions, new values in existing
  enums and bit flags, new structure members, but not new function parameters.
- An incremented Major version means a release could break some backward compatibility.

All changes between official releases are documented in the file "CHANGELOG.md".

\warning Backward compatibility is considered on the level of C++ source code, not binary linkage.
Adding new members to existing structures is treated as backward compatible if initializing
the new members to binary zero results in the old behavior.
You should always fully initialize all library structures to zeros and not rely on their
exact binary size.
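
For example, value-initializing a structure as shown below zeroes all members, including any added in future minor versions, which preserves the old behavior:

\code
VmaAllocationCreateInfo allocCreateInfo = {}; // All members start as binary zero.
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO; // Then set only what you need.
\endcode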

\section general_considerations_validation_layer_warnings Validation layer warnings

When using this library, you may encounter the following types of warnings issued by
the Vulkan validation layer. They don't necessarily indicate a bug, so it may be fine
to just ignore them.

- *vkBindBufferMemory(): Binding memory to buffer 0xeb8e4 but vkGetBufferMemoryRequirements() has not been called on that buffer.*
  - It happens when the VK_KHR_dedicated_allocation extension is enabled.
    The `vkGetBufferMemoryRequirements2KHR` function is used instead, while the validation layer seems to be unaware of it.
- *Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.*
  - It happens when you map a buffer or image, because the library maps the entire
    `VkDeviceMemory` block, where different types of images and buffers may end
    up together, especially on GPUs with unified memory like Intel.
- *Non-linear image 0xebc91 is aliased with linear buffer 0xeb8e4 which may indicate a bug.*
  - It may happen when you use [defragmentation](@ref defragmentation).

\section general_considerations_allocation_algorithm Allocation algorithm

The library uses the following algorithm for allocation, in order
(see also the illustrative sketch after this list):

-# Try to find a free range of memory in existing blocks.
-# If that fails, try to create a new block of `VkDeviceMemory` with the preferred block size.
-# If that fails, try to create such a block with size / 2, size / 4, size / 8.
-# If that fails, try to allocate a separate `VkDeviceMemory` for this allocation,
   just like when you use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
-# If that fails, choose another memory type that meets the requirements specified in
   VmaAllocationCreateInfo and go to point 1.
-# If that fails, return `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
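
The following self-contained sketch illustrates this fallback order; it is not the library's actual internal code, and all helper functions here are hypothetical stubs.

\code
// Hypothetical stubs standing in for the real allocation attempts.
static bool TryExistingBlocks(uint32_t memType) { return false; }
static bool TryNewBlock(uint32_t memType, VkDeviceSize blockSize) { return false; }
static bool TryDedicated(uint32_t memType) { return false; }

static bool AllocateWithFallback(const uint32_t* candidateMemTypes, uint32_t count,
    VkDeviceSize preferredBlockSize)
{
    for(uint32_t i = 0; i < count; ++i) // Step 5: try each suitable memory type.
    {
        const uint32_t memType = candidateMemTypes[i];
        if(TryExistingBlocks(memType)) // Step 1.
            return true;
        // Steps 2-3: preferred block size, then halved down to 1/8 of it.
        for(VkDeviceSize size = preferredBlockSize; size >= preferredBlockSize / 8; size /= 2)
            if(TryNewBlock(memType, size))
                return true;
        if(TryDedicated(memType)) // Step 4.
            return true;
    }
    return false; // Step 6: report VK_ERROR_OUT_OF_DEVICE_MEMORY.
}
\endcode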

\section general_considerations_features_not_supported Features not supported

Features deliberately excluded from the scope of this library:

-# **Data transfer.** Uploading (streaming) and downloading data of buffers and images
   between CPU and GPU memory and the related synchronization is the responsibility of the user.
   Defining some "texture" object that would automatically stream its data from a
   staging copy in CPU memory to GPU memory would rather be a feature of another,
   higher-level library implemented on top of VMA.
   VMA doesn't record any commands to a `VkCommandBuffer`. It just allocates memory.
-# **Recreation of buffers and images.** Although the library has functions for
   buffer and image creation, vmaCreateBuffer() and vmaCreateImage(), you need to
   recreate these objects yourself after defragmentation. That is because the big
   structures `VkBufferCreateInfo` and `VkImageCreateInfo` are not stored in the
   #VmaAllocation object.
-# **Handling CPU memory allocation failures.** When dynamically creating small C++
   objects in CPU memory (not Vulkan memory), allocation failures are not checked
   and handled gracefully, because that would complicate the code significantly and
   is usually not needed in desktop PC applications anyway.
   Success of an allocation is just checked with an assert.
-# **Code free of any compiler warnings.** Maintaining the library to compile and
   work correctly on so many different platforms is hard enough. Being free of
   any warnings, on any version of any compiler, is simply not feasible.
   There are many preprocessor macros that make some variables unused, function parameters unreferenced,
   or conditional expressions constant in some configurations.
   The code of this library should not be bigger or more complicated just to silence these warnings.
   It is recommended to disable such warnings instead.
-# This is a C++ library with a C interface. **Bindings or ports to any other programming languages** are welcome as external projects but
   are not going to be included in this repository.
*/