Added SortList class, and updated List, SelfList, and HashMap sort methods to use it. Sorting is done with merge sort, with an initial check to optimize for already sorted lists, and sorted lists that were appended to.
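As a rough illustration of that pre-check (not the actual SortList code; a plain `std::vector` and the standard library stand in for the engine types): find the leading sorted run first, and if only a tail was appended after it, sort just the tail and merge the two runs.

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch only (not the engine's SortList): before doing a full
// merge sort, find the length of the leading sorted run. If the whole vector
// is already sorted, there is nothing to do; if only a tail was appended after
// a sorted prefix, sort just the tail and merge the two runs.
template <typename T>
void sort_with_presorted_check(std::vector<T> &v) {
	auto first_unsorted = std::is_sorted_until(v.begin(), v.end());
	if (first_unsorted == v.end()) {
		return; // Already sorted.
	}
	std::sort(first_unsorted, v.end()); // Sort the appended tail.
	std::inplace_merge(v.begin(), first_unsorted, v.end()); // Merge sorted prefix + tail.
}
```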
This is in the scope of duplication triggered via any type in the `Variant` realm; that is: `Variant` itself, `Array`, and `Dictionary`. That includes invoking `duplicate()` from scripts.
A `duplicate_deep(deep_subresources_mode)` method is added to `Variant`, `Array`, and `Dictionary` (for compatibility reasons, simply adding an extra parameter to `duplicate()` was not possible). The default value for `deep_subresources_mode` is `RESOURCE_DEEP_DUPLICATE_NONE`, which behaves like calling `duplicate(true)`.
Remarks:
- The results of copying resources via those `Variant` types are exactly the same as if the copy had been initiated from the `Resource` type in C++.
- In order to keep some separation between `Variant` and the higher-level animal which is `Resource`, `Variant` still contains the original code for that, so it's self-sufficient unless a `Resource` is involved. Once the deep copy finds a `Resource` that has to be copied according to the duplication parameters, the algorithm invokes the `Resource` duplication machinery. When the stack unwinds back to a nesting level `Variant` can handle, the `Variant` duplication logic takes over again.
While that is good from a responsibility-separation standpoint, it would have a caveat: `Variant` would not be aware of the mapping between original and duplicate subresources, and so wouldn't be able to keep preventing multiple duplicates.
To avoid that, this commit also introduces a wormhole, a sharing mechanism by which `Variant` and `Resource` can collaborate in managing the lifetime of the original-to-duplicates map. The user-visible benefit is that the over-duplication prevention works as broadly as the whole `Variant` entity being copied, including all nesting levels, regardless of how disconnected the data members containing resources may be across all the nesting levels. In other words, despite the aforementioned division of duties between the `Variant` and `Resource` duplication logic, the duplicates map is shared among them. It's created when a `Resource` is first found and, however deep the copy is working at that point, the map is kept alive until the stack is unwound back to the root user call, the first step of the recursion.
Thanks to that common map of duplicates, this commit is able to fix the issue that `Resource::duplicate_for_local_scene()` used to ignore overridden duplicate logic.
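A minimal sketch of the shared-map technique described above, with hypothetical stand-in types rather than the engine's `Variant`/`Resource`: the map is handed down through every recursion level, so any original encountered twice reuses the single copy.

```cpp
#include <memory>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a nested resource-like object, for illustration only.
struct Res {
	std::vector<std::shared_ptr<Res>> children;
};

using DupMap = std::unordered_map<const Res *, std::shared_ptr<Res>>;

// The map lives for the whole user-initiated copy; every nesting level shares it,
// so an object referenced from several places is duplicated exactly once.
std::shared_ptr<Res> duplicate_shared(const std::shared_ptr<Res> &p_orig, DupMap &r_map) {
	if (!p_orig) {
		return nullptr;
	}
	auto found = r_map.find(p_orig.get());
	if (found != r_map.end()) {
		return found->second; // Already duplicated; reuse the single copy.
	}
	auto copy = std::make_shared<Res>();
	r_map[p_orig.get()] = copy; // Register before recursing (handles self-references).
	for (const auto &child : p_orig->children) {
		copy->children.push_back(duplicate_shared(child, r_map));
	}
	return copy;
}
```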
Thanks to a refactor, `Resource::duplicate_for_local_scene()` and `Resource::duplicate()` are now both users of the same parametrized implementation.
`Resource::duplicate()` now honors deepness in a more consistent and predictable fashion. `Resource::duplicate_deep()` is added (instead of just adding a parameter to the former, for compatibility needs).
The behavior after this change is as follows:
- Deep (`deep=true`, formerly `subresources=true`):
  - Previously, only resources found as direct property values of the one to copy would be recursively duplicated.
  - Now, in addition, arrays and dictionaries are walked, so the copy is truly deep, and only local subresources found along the way are copied.
  - Previously, subresources would be duplicated as many times as they were referenced throughout the main resource.
  - Now, each subresource is only duplicated once and, from that point on, a reference to that single copy is used. That's the enhanced behavior that `duplicate_for_local_scene()` already featured.
  - The behavior with respect to packed arrays is still duplication.
  - Formerly, arrays and dictionaries were recursively duplicated, with resources ignored.
  - Now, arrays and dictionaries are recursively duplicated, with resources duplicated.
  - When doing it through `duplicate_deep()`, there's a `deep_subresources_mode` parameter, with various possibilities to control whether no resources are duplicated (so arrays, etc. still are, but keep referencing the originals), only the internal ones are (those without a non-local path; the default), or all of them are. The default thus matches the behavior of `duplicate(true)`.
- Not deep (`deep=false`, formerly `subresources=false`):
  - Previously, the first level of resources found as direct property values would be duplicated unconditionally. Packed arrays, arrays, and dictionaries were non-recursively duplicated.
  - Now, no subresource found at any level, in any form, is duplicated; the original reference is kept instead. Packed arrays, arrays, and dictionaries are referenced, not duplicated at all.
  - Now, resources found as values of always-duplicate properties are duplicated, recursively or not, matching what was requested for the root call.
This commit also changes which virtual method has to be overridden to customize duplication: it's now the protected `_duplicate()` instead of the public `duplicate()` (a generic sketch of this pattern follows the list below).
- Serves as a first step for future refactors.
- Code is simpler.
- Algorithm is more efficient: instead of two passes (dumb copy + resolve copies), it's single-pass.
- Now obeys `PROPERTY_USAGE_NEVER_DUPLICATE`.
- Now handles deep self-references (the resource to be duplicated being referenced somewhere deep).
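The override point change described above follows the usual pattern of a non-virtual public entry point wrapping a protected virtual hook. A generic sketch, with hypothetical names and signatures that do not match Godot's actual API:

```cpp
#include <memory>

class Thing {
public:
	virtual ~Thing() = default;

	// Public entry point stays non-virtual: it can run the shared bookkeeping
	// (duplicates map, deepness handling, ...) around the customizable part.
	std::unique_ptr<Thing> duplicate(bool p_deep) const {
		std::unique_ptr<Thing> copy = _duplicate(p_deep);
		// ...common post-processing that every subclass gets for free...
		return copy;
	}

protected:
	// Subclasses override only this to customize how the copy is produced.
	virtual std::unique_ptr<Thing> _duplicate(bool p_deep) const {
		(void)p_deep;
		return std::make_unique<Thing>(*this);
	}
};
```

The benefit is that the shared bookkeeping stays in one place, while subclasses customize only the copy step itself.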
This was added together with `ppc64le` in #54490, but seemingly only for the
purpose of getting it to compile on a Linux distro that aims at maximizing
support for all CPU architectures.
I don't think anyone has ever _run_ Godot on a `ppc32` system (do those even
support OpenGL ES 3.0?) and so I don't think we should aim to support it.
Debian dropped support for its PowerPC (`ppc32`) arch in Debian 9, released
in 2017.
- Instead of calling `_ref`, the same effect is achieved by calling the base class assignment operator (see the generic sketch below).
- `_ref` no longer needs to be exposed; it is set back to private and the reference to it is removed from gdextension_interface.
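In generic terms (hypothetical types, not the actual `Ref<T>` internals), delegating to the base class assignment operator looks like this:

```cpp
// Generic illustration only: the derived assignment delegates to the base
// class operator= instead of going through a dedicated private helper.
struct Base {
	int value = 0;
	Base &operator=(const Base &p_other) = default;
};

struct Derived : Base {
	int extra = 0;
	Derived &operator=(const Derived &p_other) {
		Base::operator=(p_other); // Same effect as calling a `_ref`-style helper.
		extra = p_other.extra;
		return *this;
	}
};
```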
`variant_utility.cpp` has 8 different copies of the exact same loop to join vararg strings together. This is bad coding practice and makes the codebase needlessly cluttered. Also, the loop itself has an `i == 0` check that is unnecessary, since String is auto-initialized to be empty and `String::operator+=` already checks whether it is empty before appending. This is a non-functional change that only reorganizes the code to be a bit more readable.
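A simplified sketch of the kind of shared helper that replaces the duplicated loops, using `std::string` as a stand-in for the engine's Variant/String types:

```cpp
#include <string>
#include <vector>

// Simplified sketch of a shared vararg-joining helper (not the engine's code).
static std::string join_args(const std::vector<std::string> &p_args) {
	std::string joined;
	for (const std::string &arg : p_args) {
		joined += arg; // No `i == 0` special case needed: appending to "" is harmless.
	}
	return joined;
}
```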
Removed calls to String::chr() when appending characters to Strings in Expression, Resource, and VariantParser, to avoid creating temporary Strings for each character. Also updated the Resource case to resize String up front, since size is known.
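Roughly the same idea expressed with `std::string` (illustrative only, not the engine's String API): append the character value directly instead of building a one-character temporary, and pre-allocate when the final size is known.

```cpp
#include <string>

// Illustrative only: std::string standing in for the engine's String.
std::string repeat_slow(char c, int count) {
	std::string s;
	for (int i = 0; i < count; i++) {
		s += std::string(1, c); // Builds a temporary one-character string each time.
	}
	return s;
}

std::string repeat_fast(char c, int count) {
	std::string s;
	s.reserve(static_cast<size_t>(count)); // Size known up front: allocate once.
	for (int i = 0; i < count; i++) {
		s += c; // Appends the character directly, no temporary string.
	}
	return s;
}
```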
When we first integrated mbedTLS, we decided not to enable
MBEDTLS_THREADING_C (which adds mutex locking to calls modifying the
state), and instead to simply create separate contexts ("states") for
each connection.
This worked fine until recently.
Sadly, mbedTLS 3 added a global state for the new PSA crypto
functionalities (which are required to support TLSv1.3).
This results in TLSv1.3 connections accessing and modifying the global
state concurrently when running in threads.
This commit enables MBEDTLS_THREADING_C and MBEDTLS_THREADING_ALT to
provide a generic Godot implementation using the engine's Mutex class.
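For context, this is roughly what the alternative-threading hookup looks like in mbedTLS, here sketched with `std::mutex` standing in for Godot's Mutex class (the actual implementation lives in the engine's mbedTLS module; the names below are illustrative):

```cpp
// Sketch only. With MBEDTLS_THREADING_C + MBEDTLS_THREADING_ALT enabled,
// mbedTLS expects the application to define mbedtls_threading_mutex_t
// (normally in a user-provided threading_alt.h) and register callbacks.
//
// threading_alt.h would contain something like:
//
//     #include <mutex>
//     typedef struct mbedtls_threading_mutex_t {
//         std::mutex mutex;
//     } mbedtls_threading_mutex_t;

#include <mbedtls/threading.h>

static void tls_mutex_init(mbedtls_threading_mutex_t *p_mutex) {
	(void)p_mutex; // std::mutex is ready after construction; nothing to do.
}

static void tls_mutex_free(mbedtls_threading_mutex_t *p_mutex) {
	(void)p_mutex;
}

static int tls_mutex_lock(mbedtls_threading_mutex_t *p_mutex) {
	if (!p_mutex) {
		return MBEDTLS_ERR_THREADING_BAD_INPUT_DATA;
	}
	p_mutex->mutex.lock();
	return 0;
}

static int tls_mutex_unlock(mbedtls_threading_mutex_t *p_mutex) {
	if (!p_mutex) {
		return MBEDTLS_ERR_THREADING_BAD_INPUT_DATA;
	}
	p_mutex->mutex.unlock();
	return 0;
}

// Called once at startup so every mbedTLS/PSA context shares the same locking.
void register_tls_threading_callbacks() {
	mbedtls_threading_set_alt(tls_mutex_init, tls_mutex_free, tls_mutex_lock, tls_mutex_unlock);
}
```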
- Changed stringify to call the static function _stringify directly, instead of creating a JSON object
- Changed colon and end_statement from String to const char * to avoid extra allocations in each _stringify call
- Pass the result String by reference to each _stringify call to append to, instead of allocating a new String in each call
These changes make JSON::stringify around 2-3x faster in most cases
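A sketch of the append-into-one-buffer pattern with hypothetical types (not the actual JSON code):

```cpp
#include <string>
#include <vector>

// Hypothetical tree node, for illustration only.
struct Node {
	std::string text;
	std::vector<Node> children;
};

// Fixed separators as `const char *` literals: no string object built per call.
static void stringify_into(const Node &p_node, std::string &r_result, const char *p_separator) {
	r_result += p_node.text;
	for (const Node &child : p_node.children) {
		r_result += p_separator;
		stringify_into(child, r_result, p_separator); // Appends into the shared buffer.
	}
}

std::string stringify(const Node &p_root) {
	std::string result; // A single buffer grows for the whole tree, instead of
	                    // building and concatenating a new string at every level.
	stringify_into(p_root, result, ",");
	return result;
}
```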
- Avoid temporary copy of p_array in Array::append_array when types match
- Call ptrw() once before looping in methods that return new Arrays, to avoid a copy_on_write call for each item (recursive_duplicate, slice, filter, map); see the sketch below
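The ptrw() hoisting can be illustrated with a tiny copy-on-write stand-in (not Godot's actual Array/CowData):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Minimal copy-on-write container stand-in (illustration only).
struct CowInts {
	std::shared_ptr<std::vector<int>> data = std::make_shared<std::vector<int>>();

	std::size_t size() const { return data->size(); }

	// Like ptrw(): detaches from shared storage if needed, then returns a writable pointer.
	int *ptrw() {
		if (data.use_count() > 1) {
			data = std::make_shared<std::vector<int>>(*data); // Copy-on-write.
		}
		return data->data();
	}
};

void square_slow(CowInts &arr) {
	for (std::size_t i = 0; i < arr.size(); i++) {
		arr.ptrw()[i] *= arr.ptrw()[i]; // Uniqueness check on every access.
	}
}

void square_fast(CowInts &arr) {
	int *w = arr.ptrw(); // Single uniqueness check (and possible copy) up front.
	for (std::size_t i = 0; i < arr.size(); i++) {
		w[i] *= w[i];
	}
}
```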