SUSE AI architecture
SUSE AI is a cloud-native solution that comprises multiple software building blocks. These blocks include the Linux operating system, a Kubernetes cluster with a Web UI management layer, supportive tools to utilize GPU capabilities, and other containerized applications that handle monitoring and security. The SUSE Application Collection includes a collection of AI-related applications called AI Library.
SUSE AI building blocks
The underlying operating system with the optional NVIDIA driver installed. We prefer SUSE Linux Enterprise Server. If you require an immutable operating system, SLE Micro is the recommended alternative.
Kubernetes cluster managed by SUSE Rancher Prime, ensuring container and application lifecycle management. We recommend using the SUSE Rancher Prime: RKE2 distribution.
Utilizes the NVIDIA GPU computing power and capabilities for processing AI-related tasks.
For security and compliance.
Provides advanced performance and data monitoring.
Enterprise-grade storage solution.
For virtualized workloads.
For managing multiple Linux distributions.
As a source of Helm charts and container images for the AI Library applications.
AI Library applications
An extensible X.509 certificate controller for Kubernetes workloads.
A search and analytics suite for analyzing and visualizing search data.
A vector database built for generative AI applications with minimal performance loss.
A platform that simplifies the installation and management of large language models (LLM) on local devices.
An extensible Web user interface for the Ollama LLM runner.
A high-performance inference and serving engine for large language models (LLMs).
The MCP-to-OpenAPI proxy server provided by Open WebUI.
An open source machine learning framework.
An open source platform to manage the machine learning lifecycle, including experimentation, reproducibility, deployment and a central model registry.
Figure 1. Basic schema of SUSE AI
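The lifecycle-management platform listed last above matches the description of MLflow in the AI Library. Assuming that is the component in question and that the mlflow Python package is available, the following is a minimal sketch of logging one experiment run; the tracking URI and experiment name are hypothetical placeholders.

```python
import mlflow

# Minimal sketch: record one run. The tracking URI and experiment name are
# hypothetical; point them at your deployed tracking server instead.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)  # hyperparameter of the run
    mlflow.log_metric("accuracy", 0.93)      # resulting metric
```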
Update strategy of desktop components
The SLE 16 desktop environment provides the following components:
The minimum version is 48.0. The exact version must be determined before the end of beta releases.
These components are updated to the latest stable branch version at the last beta release.
The minimum version is 140.3. The update strategy follows the ESR update lifecycle.
The minimum version is 2.46. WebKit has a periodic update cycle determined by critical CVEs. The component is updated but not backported.
The accessibility tools are at the latest upstream stable versions in 16.0. Only bug fixes will happen in this minor release.
Qt 6 is delivered. The initial version is Qt 6.9.
Not a standard SUSE Linux delivery; available only in PackageHub 16.
Package maintainer's FAQ
Raise a Jira ticket so we can track and document the process. If you are not sure about the lifecycle category, contact your manager and a SLEA architect.
The further down in the stack your component is, the more packages depend on it, so you should take a more conservative approach. When in doubt, contact the SLEA architects.
I want to use a balanced lifecycle for my package. Should I go for several versions in parallel?
In general, it is better to replace the old package with a new version. This creates less overhead on the maintainer side and is less confusing to customers. A sliding window is useful for most toolchain components that have API or ABI changes as part of new releases. However, the particular approach depends on the amount and type of changes. For example, if the changes can be adopted easily or even automated, it is better to just update the package. When changes lead to conflicts that neither we nor the customer can easily resolve, maintaining parallel versions is a better option.
SUSE tried this on a case-by-case basis for Python modules.
Should I convert my package from a balanced lifecycle to an agile one, and release updated versions also to code streams under LTS?
This is a question that we need to investigate case by case. In general, if the particular package is beneficial for customers on the older releases, then it may be worth releasing the updated packages to older releases. For example, if a component is mostly about operations, it may be useful for customers on older releases. By contrast, newer packages that provide hardware enablement are usually not needed, as customers' hardware has not changed since the time of installation.
Python update strategy
The support lifecycles of the Python interpreter are the following:
It is supported for 2 years. The support lifecycle also applies to basic packages: setuptools, venv, pip, wheel and pipx.
The OS's main Python. It is supported for 4 years and includes the interpreter and its stack: /usr/bin/python3 and all packages with the python3- prefix.
It is supported for 4 years and includes the interpreter and stack but not the python3- packages. A legacy interpreter is a former primary Python; short-term interpreters follow a different lifecycle and do not become legacy.
The starting version of Python in SUSE Linux 16 is Python 3.13. SUSE plans to release every odd-numbered version of Python and provide LTS for up to 8 years.
The latest Python interpreter is delivered with each minor release with only short-term support. The outdated Python version is migrated to legacy mode and is supported for another 4 years.
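Because several interpreters can coexist during their overlapping support windows, scripts that rely on the primary interpreter should verify the version they run on. A minimal sketch, assuming Python 3.13 as the primary interpreter described above:

```python
import sys

# Minimal guard: refuse to run on anything older than the primary interpreter
# shipped with SUSE Linux 16 (Python 3.13, per the strategy described above).
if sys.version_info < (3, 13):
    raise SystemExit(f"Python 3.13 or newer required, found {sys.version.split()[0]}")
print(f"Running on Python {sys.version.split()[0]}")
```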
The following table shows probable versions of Python to be delivered with particular SUSE Linux versions. The exact version will depend on the upstream community.
Update strategy of toolchain components
The initial glibc version is 2.40. The package is updated with each minor SUSE Linux release if there are reasons for changes to the package (for example, feature requests, performance tuning and so on).
Package updates provide backward compatibility for dynamic linking, allowing programs built on previous SUSE Linux 16 releases to run. However, symbols deprecated in the upstream glibc version are not declared for the compiler and are not available for link editing (static linking). Such cases, where source-level and static-linking backward compatibility is not guaranteed, are properly documented.
Developers of user-space applications can use the supported GNU Compiler Collection (GCC) C and C++ compilers. Compilers for other languages, cross-compilers and accelerator offloading compilers are not available on SUSE Linux from standard repositories, but developers can install them from PackageHub with community support.
The initial major version in SUSE Linux 16.0 and 16.1 is GCC 15. Later SUSE Linux releases will introduce the tick-tock model:
Each even minor release of SUSE Linux (the tick release) introduces a new major version of GCC. This GCC version is supported during the LTS for the minor version that introduced it and also for the next SUSE Linux minor version. For example, if GCC 17 is introduced in SUSE Linux 16.2, it will be supported until the end of LTS for SUSE Linux 16.3.
Each odd minor release of SUSE Linux (the tock release) comes with a new non-default major version of GCC. As this version is not the default one, you must explicitly invoke the binaries gcc-x, g++-x and gfortran-x to use it. These non-default versions are supported for 24 months.
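Non-default compilers are therefore reached only through their versioned binaries. The following sketch probes which versioned GCC binaries are installed on a system; the scanned version range is an assumption for illustration, not a product guarantee.

```python
import shutil
import subprocess

# Sketch: report which versioned GCC binaries (gcc-15, gcc-16, ...) are on PATH.
# The version range 15-19 is an illustrative assumption only.
for major in range(15, 20):
    binary = f"gcc-{major}"
    if shutil.which(binary):
        result = subprocess.run([binary, "--version"], capture_output=True, text=True)
        print(result.stdout.splitlines()[0])
```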
Any combination of the following compiler flags is supported (a worked compile example follows the lists below):
-O0, -O2 and -O3
-ffast-math
-flto
-fpie and -fno-pie
-fPIC
-g
The following options are also supported on AMD64/Intel 64:
-march=x86-64-v2 (the default)
-march=x86-64-v3
-march=x86-64-v4
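The following is a minimal sketch of compiling a trivial C program with a combination of the supported flags listed above. It assumes the system gcc is on PATH; pick the -march level that matches your hardware.

```python
import pathlib
import subprocess
import tempfile

# Sketch: compile a trivial C program with a combination of the supported flags
# listed above (-O2, -g, -fPIC, -march=x86-64-v2).
workdir = pathlib.Path(tempfile.mkdtemp())
source = workdir / "hello.c"
source.write_text('#include <stdio.h>\nint main(void) { puts("hello"); return 0; }\n')

subprocess.run(
    ["gcc", "-O2", "-g", "-fPIC", "-march=x86-64-v2",
     "-o", str(workdir / "hello"), str(source)],
    check=True,
)
print(f"Built {workdir / 'hello'}")
```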
Other compiler flags are not supported by SUSE. However, we can assist in reporting issues to the upstream GCC project.
For the C language, the most recent supported version is ISO/IEC 9899:2024 (known as C23) with GNU extensions.
For C++, the most recent supported version is ISO/IEC 14882:2017 (known as C++17) with GNU extensions.
SUSE also provides unsupported packages with compilers for Fortran, Ada and Go (gcc-go).
To build a kernel module, you need the same compiler version used to build the kernel. Therefore, SUSE provides the same GCC version, initially GCC 13.N.
The compiler for kernel modules is not intended for general use. It may also be dropped in a future minor release of SUSE Linux.
The default compiler used internally to build SUSE Linux 16 packages is GCC 13. The build compiler is provided as an unsupported package in PackageHub.
Package maintainers can use the newer GCC available in the internal build service, but in this case, they must be aware of possible ABI issues (for example, avoid linking code built against different C++ standards).
GCC runtime libraries (libgcc, libstdc++) are updated to the versions of a new major version of GCC on a yearly basis in all SUSE Linux minor releases under LTS. The runtime libraries are updated during maintenance updates.
The runtime libraries are fully supported during the general support and LTS of each SUSE Linux minor version.
When security incidents may happen, these runtime libraries are the only part of the toolchain that can be used in that context.
The GNU Binutils are upgraded to the latest upstream version in all SUSE Linux 16 minor releases under general support or LTS.
GDB is updated to the newest major version in all SUSE Linux 16 minor releases under general support or LTS. However, this means that certain functionality may be removed from the package.
LLVM is available for use exclusively with MESA. Any other use of LLVM is not supported. Front-ends like Clang are not provided on SUSE Linux. You may get them only from a community-supported repository.
SUSE maintains backward dynamic-linking compatibility for glibc and the C++ compiler runtime library. This means that a binary built on an earlier minor version of SUSE Linux 16 runs correctly on a later minor release.
Features deprecated by upstream are removed either from newer major versions of compilers or from all toolchain components in later SUSE Linux minor versions. In the case of glibc, the deprecated symbols are then removed from header files and are not available for link editing. Therefore, code using these symbols will no longer compile or (statically) link.
Development tools like the compiler are not hardened to process untrusted input. In contrast, the GCC and C++ runtime libraries are the only parts of the toolchain that are hardened for this purpose.
Types of package lifecycles
When categorizing a package, the impact of its changes on the system is considered. To estimate this impact, the interfaces of the component must first be identified. For example, in the case of a shared library, changes to its API or ABI may disrupt the system. For a compiler or interpreter, disruptive changes may involve supported languages, command-line options, or the performance of the compiled code. By contrast, a minor backward compatible change may have little to no impact.
In general, a package falls into exactly one of the categories: stable, balanced or agile. However, some technologies may have packages sorted into several categories, for example, Python. The following sections describe the package categories in detail.
Packages that are marked as stable (also called conservative) are those that do not deliver a disruptive change while a customer is on any of the SUSE Linux 16 minor versions. During the upgrade to another minor version, the package version may change but the newer version does not introduce incompatible behavior. Customers expect to have LTS on these packages.
The packages belonging to this category can change, but the following criteria apply:
Changes in functionality are backward compatible: functionality can be added, but not removed.
Changes to interfaces are backward compatible.
The default behavior of applications does not change unexpectedly.
Under exceptional circumstances like serious security issues, the package can be updated even at the cost of bringing disruptive changes. Alternatively, if a new version of a package contains disruptive changes, this version can be delivered as an alternative to the previous one.
A typical example of a stable package is util-linux. Another is glibc, which remains backward compatible except for symbols deprecated upstream.
Packages categorized as balanced change over time (driven by upstream evolution and customer demands), but should not cause disruptive changes within one minor version. A few incompatible changes are possible, but they are always documented in the release notes for that particular minor release.
Customers expect a moderate number of changes during the upgrade from one minor release to the next. However, the transition must be smooth, either by getting back to the original behavior or by providing the older version in parallel with the new one.
When a change is introduced, it can be done in one of the following ways:
A single version is provided with the minor release. The new version replaces the previous one while allowing for smooth transitions between the versions. New versions are released only with a new minor release.
Two versions are provided simultaneously. The new version is introduced in addition to the existing version, which then becomes obsolete with the next minor release.
Such versions are supported at least until the end of the LTS of the minor release that introduced them.
To help with incorporating changes in a conservative environment, the tick-tock model can be used. For example, we could mark even-numbered minor releases as tick releases and odd-numbered minor releases as tock. These tock releases could still introduce version updates in packages with a balanced lifecycle, but these updates remain fully backward compatible in all relevant aspects.
For ISVs, a stable runtime environment is critical. Therefore, to avoid breaking third-party applications, in the case of shared libraries where SUSE provides the corresponding -devel package, the older .so version is not deprecated immediately. For example, for a package called foo, there are packages libfoo-0_1, foo-devel and foo-utils. If the package is updated and the shared library version changes to libfoo-0_2, libfoo-0_1 is not removed.
Typical examples of balanced packages include the following:
systemd: changes should be backward compatible; incompatible changes are documented in the release notes
kernel: the kernel is updated with each minor release
virtualization components: they are updated with each minor release
MariaDB: a new minor version with each minor release
PostgreSQL: upstream versions are released on a roughly annual basis, so the new version is introduced either with a minor release or as part of a maintenance update for an older minor release
Python: a version supported for a longer period of time with a set of modules on top
Packages categorized as agile are kept up to date with upstream even though they may bring incompatible changes to the system.
Updates to these packages are done in one of two ways:
With the release of a new package version, one or more older versions are supported for customers that cannot switch easily to the new version. There is a sliding window in which different versions are supported concurrently (for example, for version N and version N - 1). These concurrent versions are supported for a certain period of time that can differ from the lifetime of the minor release.
The package is just updated without support for the older version.
All new package versions are released simultaneously to all minor releases under general support and generally also in LTS.
The following packages are categorized as agile:
Go and Rust: a new version is released roughly every 6 months with a sliding window of 2 months.
GCC: a new version of the compiler is released with each minor release with a sliding window of 2. However, the libraries libgcc and libstdc++ are categorized as stable.
CLI and SDK for Public Cloud: a new version every quarter, and the new version replaces the previous one.
Python interpreter, library and pip: a new version is released annually.
Data files for time zones: a new version is released when a new set of definitions becomes available.
Valkey update strategy
On SUSE Linux Enterprise 16, Valkey is updated and supported according to the following rules:
The Valkey version is kept in sync with upstream.
Each SUSE Linux Enterprise 16 minor release has the latest available version.
Valkey is supported for all SUSE Linux Enterprise 16 minor releases under general support and LTS.
Backward-compatible Valkey minor releases are published promptly after the corresponding upstream release.
Patch version updates are delivered continuously to SUSE Linux Enterprise minor releases as soon as the updates are released.
Security fixes are backported if the upstream project does not issue a new release incorporating the fix.
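To confirm which Valkey version a given system currently delivers, you can query a running instance. A minimal sketch, assuming a server on the default port 6379 and the protocol-compatible redis Python client; the exact INFO field name is also an assumption.

```python
import redis  # protocol-compatible client; a dedicated Valkey client may also be used

# Sketch: print the server version reported by a running Valkey instance.
# Host, port and the INFO field names are assumptions for illustration.
client = redis.Redis(host="localhost", port=6379)
info = client.info("server")
print(info.get("valkey_version") or info.get("redis_version"))
```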
GPU hardware for AI/ML workloads
Configuring and managing nodes with hardware resources can require multiple configurations for software components. These include drivers, container runtimes and libraries. To use NVIDIA GPUs in a Kubernetes cluster, you need to configure the NVIDIA GPU Operator. Because the GPU is a special resource in the cluster, you need to install the following components to enable deployment of workloads for processing on the GPU.
NVIDIA drivers (to enable CUDA)
Kubernetes device plug-in
Container runtime
Other tools to provide capabilities such as monitoring or automatic node labeling
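Once these components are in place, the device plug-in advertises GPUs as an allocatable node resource. The following sketch, assuming the kubernetes Python client and a reachable kubeconfig, lists that resource per node; the nvidia.com/gpu resource name follows the NVIDIA device plug-in convention.

```python
from kubernetes import client, config

# Sketch: show how many GPUs the NVIDIA device plug-in advertises per node.
# Assumes a reachable kubeconfig and the conventional "nvidia.com/gpu" resource name.
config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    gpus = (node.status.allocatable or {}).get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")
```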
To ensure that the NVIDIA GPU Operator is installed correctly, the Kubernetes cluster must meet the following prerequisites:
All worker nodes must run the same operating system version to use the NVIDIA GPU Driver container.
Nodes must be configured with a container engine, such as Docker (CE/EE), containerd or Podman.
Nodes should be equipped with NVIDIA GPUs.
Nodes should have NVIDIA drivers installed.
The NVIDIA GPU Operator is compatible with a range of NVIDIA GPUs. For a full list of supported GPUs, refer to NVIDIA GPU Operator Platform Support.
SUSE AI hardware requirements
For successful deployment and operation, SUSE AI has the same hardware prerequisites as a SUSE Rancher Prime: RKE2 cluster. For requirements of individual applications, refer to the sections below.
At least 32 GB of RAM per node. This is the minimum recommendation for the control plane node. Additional resources may be needed for the worker nodes based on workload.
A multicore processor with a minimum of 4 cores. 8 cores or more may be necessary depending on the cluster scale and application demands.
50 GB or more is recommended for control plane nodes.
Additional space for data storage, such as application data or log files, is required depending on the deployment scale and the workloads running on the cluster.
SSDs or high-speed storage are preferred for faster data access and efficient operation of containerized workloads.
A reliable and stable network connection between all nodes in the cluster.
Cluster nodes must have valid DNS A records following the *.apps.CLUSTER_DOMAIN pattern (see the resolution check after this list). The nodes must be able to communicate with each other and access external resources, such as container images or software updates.
Ensure that all nodes have public IP addresses or are accessible via VPN or other private network if deploying across multiple data centers.
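The following sketch confirms that a record under the wildcard resolves. The host name uses a hypothetical cluster domain; substitute your own CLUSTER_DOMAIN.

```python
import socket

# Sketch: confirm that a host under the *.apps.CLUSTER_DOMAIN wildcard resolves.
# "example-cluster.example.com" is a hypothetical cluster domain.
host = "dashboard.apps.example-cluster.example.com"
try:
    print(f"{host} resolves to {socket.gethostbyname(host)}")
except socket.gaierror as err:
    raise SystemExit(f"DNS lookup failed for {host}: {err}")
```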
+|
While 32 GB of RAM is the minimum for basic functionality, a + production-grade deployment with high availability, multi-node clusters, + or running resource-intensive applications like AI/ML workloads might + require more. + + |
64 GB or more per node is recommended for larger clusters or to run applications with high resource demands.
At least 8 cores, ideally 16 or more cores, depending on the expected load.
For larger-scale clusters or persistent storage applications, 100 GB or more of disk space per node might be required.
Using high-performance SSDs is recommended, especially for workloads with high I/O requirements, such as databases or AI/ML model training.
Ensure a low-latency, high-throughput network for efficient communication between nodes, especially if deploying in multi-region or multi-cloud environments.
+|
For more detailed hardware recommendations, refer to the official SUSE Rancher Prime: RKE2 + installation requirements documentation at + . + + |
Milvus requirements
The following requirements are for basic Milvus deployment on a single node or a small scale.
A minimum of 32 GB of RAM.
At least 8 CPU cores.
At least 100 GB of storage, preferably SSD.
A stable connection with 1 Gbps network bandwidth.
The following requirements are for multi-node Milvus clusters or heavy workloads, such as large vector databases.
A minimum of 64 GB of RAM per node.
8–16 CPU cores per node, or more.
500 GB or more of high-speed storage, ideally SSD or NVMe SSD.
10 Gbps Ethernet or faster for high-performance clusters.
The following CPU instruction sets are required for Milvus (a quick verification sketch follows the list):
SSE4.2
AVX
AVX2
AVX-512
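On Linux, the exposed instruction sets can be read from /proc/cpuinfo. A minimal sketch follows; the flag spellings, notably "avx512f" for AVX-512, are common kernel names and are assumptions for illustration.

```python
# Sketch: check /proc/cpuinfo (Linux) for the instruction sets listed above.
required = {"sse4_2", "avx", "avx2", "avx512f"}  # flag spellings are assumptions

flags = set()
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

missing = required - flags
print("All listed instruction sets are present" if not missing else f"Missing: {sorted(missing)}")
```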
Running Milvus requires specific versions of the following software:
SUSE-supported versions of SUSE Rancher Prime: RKE2 that use Kubernetes 1.18 or higher.
The recommended version is 3.5.0 or later.
Storage type: SSDs or NVMe SSDs are highly recommended for fast read/write access to large datasets and high-performance vector retrieval.
Metadata and data storage: For large-scale deployments, ensure that metadata and vector data are stored on fast disks (SSD or NVMe).
For high-performance clusters, especially for large-scale deployments, ensure high-bandwidth network connectivity between nodes.
For more detailed hardware recommendations, refer to the official Milvus documentation and the prerequisite Docker documentation.
SUSE Observability requirements
At least 3 nodes.
A minimum of 32 GB of RAM.
At least 16 CPU cores.
At least 5 GB of storage, preferably SSD.
For more detailed recommendations, refer to the following official documentation:
SUSE Observability requirements for SUSE Rancher Prime
Ollama requirements
The version of Ollama provided with SUSE AI is optimized for NVIDIA GPU hardware. This section guides you through the steps for configuring Ollama on an NVIDIA-enabled system, including necessary configurations for both the hardware and software.
General recommendations
The recommended GPU models include Tesla, A100, V100, RTX 30 series, or other compatible NVIDIA GPUs.
Ensure that the CUDA Compute Capability of your GPU is compatible with the required version of Ollama.
At least 16 GB of RAM is recommended. However, higher amounts (32 GB or more) may be necessary for larger models or workloads.
At least 50 GB of free disk space is recommended for storing the container images and any data files processed by Ollama.
You must install nvidia-docker (the NVIDIA Container Toolkit) to allow Docker containers to use the GPU. For more details, refer to the NVIDIA Container Toolkit documentation.
You must install the CUDA version supported by your GPU model. For most recent GPUs, CUDA 11.0 or later is required. Refer to the CUDA Toolkit installation guide for more details.
Install the NVIDIA driver compatible with your GPU model. Its version must be compatible with the installed CUDA toolkit.
+|
You can check your GPU driver version by running the + nvidia-smi command. + + |
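Once the service is running with GPU access, you can also confirm that Ollama responds over its HTTP API. A minimal sketch follows; the localhost address and port 11434 are the upstream defaults and may differ in your deployment.

```python
import json
import urllib.request

# Sketch: query a running Ollama instance for its version and locally pulled models.
# The localhost address and port 11434 are upstream defaults and are assumptions here.
base = "http://localhost:11434"
for path in ("/api/version", "/api/tags"):
    with urllib.request.urlopen(base + path) as response:
        print(path, json.dumps(json.load(response)))
```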
Open WebUI requirements
Because Open WebUI shares most dependencies with Milvus and Ollama, follow the hardware requirements mentioned in Milvus requirements and Ollama requirements.
A stable network connection is essential, particularly if Open WebUI is integrated with other services or databases. Ensure sufficient bandwidth for Web traffic and API calls.
To interact with the Open WebUI interface, use standard Web browsers such as Google Chrome, Mozilla Firefox or Microsoft Edge.
SUSE Rancher Prime requirements
At least 3 nodes.
A minimum of 32 GB of RAM.
At least 8 CPU cores.
At least 200 GB of storage, preferably SSD.
For more detailed recommendations, refer to the following official documentation:
SUSE Rancher Prime installation requirements
SUSE Rancher Prime architecture recommendations
SUSE Security requirements
The following container instances run on existing cluster nodes:
1 Manager instance
3 Controller instances
1 Enforcer instance on each cluster node
2 Scanner & Updater instances
A minimum of 2 GB of RAM.
At least 2 CPU cores.
At least 5 GB of storage, preferably SSD.
For more detailed recommendations, refer to the following official documentation:
SUSE Security system requirements