diff --git a/concepts/adoc_output/AI-intro-how-works.adoc b/concepts/adoc_output/AI-intro-how-works.adoc new file mode 100644 index 000000000..b8f8c5bb4 --- /dev/null +++ b/concepts/adoc_output/AI-intro-how-works.adoc @@ -0,0 +1,104 @@ +%entities; ++]>++ + +&productname; architecture + +&productname; is a cloud native solution that comprises multiple +software building blocks. These blocks include the Linux operating +system, &kube; cluster with a Web UI management layer, supportive tools +to utilize GPU capabilities, and other containerized applications that +care for monitoring and security. The &sappco; includes a collection of +AI-related applications called &ailibrary;. + +&productname; building blocks + +Linux operating system + +The underlying operating system with the optional &nvidia; driver +installed. We prefer &sls; (). If you require an immutable operating +system, &slema; is the recommended alternative. + +&kube; cluster + +&kube; cluster managed by &ranchermanager; ensuring container and +application lifecycle management. We recommend using the &rke2; () +distribution. + +&nvoperator; + +Utilizes the &nvidia; GPU computing power and capabilities for +processing AI-related tasks. + +&ssecurity; () + +For security and compliance. + +&sobservability; () + +Provides advanced performance and data monitoring. + +&sstorage; () + +Enterprise-grade storage solution. + +&svirtualization; () + +For virtualized workloads. + +&smlm; () + +For managing multiple Linux distributions. + +&sappco; () + +As a source of &helm; charts and container images for the &ailibrary; +applications. + +&ailibrary; applications + +Following is a list of AI applications that you can find in the +&sappco;. For a complete and up-to-date list, refer to . + +&certmanager; () + +An extensible X.509 certificate controller for &kube; workloads. + +&opensearch; () + +A search and analytics suite for analyzing and visualizing search data. + +&milvus; () + +A vector database built for generative AI applications with minimal +performance loss. + +&ollama; () + +A platform that simplifies the installation and management of large +language models (LLM) on local devices. + +&owui; () + +An extensible Web user interface for the &ollama; LLM runner. + +&vllm; ( + +A high-performance inference and serving engine for large language +models (LLMs). + +&mcpo; () + +The &mcp;-to-OpenAPI proxy server provided by &owui;. + +&pytorch; () + +An open source machine learning framework. + +&mlflow; () + +An open source platform to manage the machine learning lifecycle, +including experimentation, reproducibility, deployment and a central +model registry. + +Basic schema of &productname; + +An image showing a basic structure of &productname; diff --git a/concepts/adoc_output/packages_lifecycle_desktop_components.adoc b/concepts/adoc_output/packages_lifecycle_desktop_components.adoc new file mode 100644 index 000000000..ddacbc313 --- /dev/null +++ b/concepts/adoc_output/packages_lifecycle_desktop_components.adoc @@ -0,0 +1,42 @@ +%entities; ++]>++ + +Update strategy of desktop components + +In general, desktop components follow the balanced lifecycle of their +packages. + +The &slea; 16 desktop environment provides the following components: + +GNOME desktop + +The minimum version is 48.0. The exact version must be determined before +the end of beta releases. + +GStreamer, PipeWire and Flatpak + +These components are updated to the latest stable branch version at the +last beta release. + +Firefox + +The minimum version is 140.3. 
The update strategy follows the ESR update +lifecycle. + +WebKit + +The minimum version is 2.46. WebKit has a periodic update cycle +determined by critical CVEs. The component is updated but not +backported. + +BRLTTY + +The accessibility tools are at the latest upstream stable versions in +the 16.0. Only bug fixes will happen in this minor release. + +QT + +The Qt 6 is delivered. The initial version is Qt 6.9. + +KDE + +Not a standard &suselinux; delivery; available only in PackageHub 16. diff --git a/concepts/adoc_output/packages_lifecycle_faq.adoc b/concepts/adoc_output/packages_lifecycle_faq.adoc new file mode 100644 index 000000000..cb09f221c --- /dev/null +++ b/concepts/adoc_output/packages_lifecycle_faq.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +Package maintainer's FAQ + +The topic covers FAQ a package mantaine may have. + +I want to define a lifecycle for my package. What should I do? + +Raise a Jira ticket so we can track and document the process. If you are +not sure about the lifecycle category, contact your manager and a SLEA +architect. + +Which type of lifecycle can or should I adopt? + +The further down in the stack your component is, the more packages +depend on it, so you should take a more conservative approach. When in +doubt, contact the SLEA architects. + +I want to use a balanced lifecycle for my package. Should I go for +several versions in parallel? + +In general, it is better to replace the old package with a new version. +It creates less overhead on the maintainer side and is less confusing to +customers. A sliding window is useful for most toolchain components that +have API or ABI changes as part of new releases. However, the particular +approach depends on the amount and type of changes. For example, if the +changes can be adopted easily or even automated, then it is better to +just update the package. When changes lead to conflicts that neither we +nor the customer can easily resolve, then maintaining parallel versions +is a better option. + +SUSE tried this on a case-by-case basis for Python modules. + +Should I convert my package from a balanced lifecycle to an agile one, +and release updated versions also to code streams under LTS? + +This is a question that we need to investigate in every case. In +general, if the particular package is beneficial for customers on the +older releases, then it may be worth releasing the updated packages to +older releases. For example, if a component is mostly about operations, +then it may be useful for customers on older releases. On the contrary, +newer packages that provide hardware enablement are usually not needed, +as customers' hardware has not changed since the time of installation. diff --git a/concepts/adoc_output/packages_lifecycle_python.adoc b/concepts/adoc_output/packages_lifecycle_python.adoc new file mode 100644 index 000000000..8cad1a349 --- /dev/null +++ b/concepts/adoc_output/packages_lifecycle_python.adoc @@ -0,0 +1,56 @@ +%entities; ++]>++ + +Python update strategy + +The primary Python interpreter with its stack is 3.13 in &sle; +&productnumber; + +Python interpreter and stack support lifecycles + +The support lifecycles of the Python interpreter are the following: + +Short-term support interpreter + +It is supported for 2 years. The support lifecycle applies also to basic +packages: setuptools, venv, pip, wheel and pipx. + +Primary Python + +The OS's main Python. It is supported for 4 years and includes the +interpreter and its stack, /usr/bin/python3 and all packages with the +python3- prefix. 
+ +Legacy Python + +It is supported for 4 years and includes the interpreter and stack but +not the python3- packages. A legacy interpreter is a former primary +Python. Short-term interpreters, by contrast, follow a different +lifecycle and are becoming legacy. + +Python release cycle + +The starting version of Python in &suselinux; 16 is Python 3.13. &suse; +plans to release every odd version of Python and provide the LTS for up +to 8 years. + +The latest Python interpreter is delivered with each minor release with +only short-term support. The outdated Python version is migrated to the +legacy mode and is supported for another 4 years. + +The following table shows probable versions of Python to be delivered +with particular &suselinux; versions. The exact version will depend on +the upstream community. + +Python versions delivered per &suselinux; minor release + +Python version &suselinux; minor release 3.13 3.14 3.15 3.16 3.17 3.18 +3.19 + +&suselinux; 16.0 The primary Python stack &suselinux; 16.1 The primary +Python stack As the short-term support interpreter &suselinux; 16.2 The +primary Python stack As the short-term support interpreter &suselinux; +16.3 The legacy Python The primary Python As the short-term support +interpreter &suselinux; 16.4 The primary Python As the short-term +support interpreter &suselinux; 16.5 The legacy Python The primary +Python As the short-term support interpreter &suselinux; 16.6 The +primary Python As the short-term support interpreter diff --git a/concepts/adoc_output/packages_lifecycle_toolchain.adoc b/concepts/adoc_output/packages_lifecycle_toolchain.adoc new file mode 100644 index 000000000..8cad1a349 --- /dev/null +++ b/concepts/adoc_output/packages_lifecycle_toolchain.adoc @@ -0,0 +1,56 @@ +%entities; ++]>++ + +Python update strategy + +The primary Python interpreter with its stack is 3.13 in &sle; +&productnumber; + +Python interpreter and stack support lifecycles + +The support lifecycles of the Python interpreter are the following: + +Short-term support interpreter + +It is supported for 2 years. The support lifecycle applies also to basic +packages: setuptools, venv, pip, wheel and pipx. + +Primary Python + +The OS's main Python. It is supported for 4 years and includes the +interpreter and its stack, /usr/bin/python3 and all packages with the +python3- prefix. + +Legacy Python + +It is supported for 4 years and includes the interpreter and stack but +not the python3- packages. A legacy interpreter is a former primary +Python. Short-term interpreters, by contrast, follow a different +lifecycle and are becoming legacy. + +Python release cycle + +The starting version of Python in &suselinux; 16 is Python 3.13. &suse; +plans to release every odd version of Python and provide the LTS for up +to 8 years. + +The latest Python interpreter is delivered with each minor release with +only short-term support. The outdated Python version is migrated to the +legacy mode and is supported for another 4 years. + +The following table shows probable versions of Python to be delivered +with particular &suselinux; versions. The exact version will depend on +the upstream community. 
+ +Python versions delivered per &suselinux; minor release + +Python version &suselinux; minor release 3.13 3.14 3.15 3.16 3.17 3.18 +3.19 + +&suselinux; 16.0 The primary Python stack &suselinux; 16.1 The primary +Python stack As the short-term support interpreter &suselinux; 16.2 The +primary Python stack As the short-term support interpreter &suselinux; +16.3 The legacy Python The primary Python As the short-term support +interpreter &suselinux; 16.4 The primary Python As the short-term +support interpreter &suselinux; 16.5 The legacy Python The primary +Python As the short-term support interpreter &suselinux; 16.6 The +primary Python As the short-term support interpreter diff --git a/concepts/adoc_output/packages_lifecycle_types.adoc b/concepts/adoc_output/packages_lifecycle_types.adoc new file mode 100644 index 000000000..8cad1a349 --- /dev/null +++ b/concepts/adoc_output/packages_lifecycle_types.adoc @@ -0,0 +1,56 @@ +%entities; ++]>++ + +Python update strategy + +The primary Python interpreter with its stack is 3.13 in &sle; +&productnumber; + +Python interpreter and stack support lifecycles + +The support lifecycles of the Python interpreter are the following: + +Short-term support interpreter + +It is supported for 2 years. The support lifecycle applies also to basic +packages: setuptools, venv, pip, wheel and pipx. + +Primary Python + +The OS's main Python. It is supported for 4 years and includes the +interpreter and its stack, /usr/bin/python3 and all packages with the +python3- prefix. + +Legacy Python + +It is supported for 4 years and includes the interpreter and stack but +not the python3- packages. A legacy interpreter is a former primary +Python. Short-term interpreters, by contrast, follow a different +lifecycle and are becoming legacy. + +Python release cycle + +The starting version of Python in &suselinux; 16 is Python 3.13. &suse; +plans to release every odd version of Python and provide the LTS for up +to 8 years. + +The latest Python interpreter is delivered with each minor release with +only short-term support. The outdated Python version is migrated to the +legacy mode and is supported for another 4 years. + +The following table shows probable versions of Python to be delivered +with particular &suselinux; versions. The exact version will depend on +the upstream community. 
+ +Python versions delivered per &suselinux; minor release + +Python version &suselinux; minor release 3.13 3.14 3.15 3.16 3.17 3.18 +3.19 + +&suselinux; 16.0 The primary Python stack &suselinux; 16.1 The primary +Python stack As the short-term support interpreter &suselinux; 16.2 The +primary Python stack As the short-term support interpreter &suselinux; +16.3 The legacy Python The primary Python As the short-term support +interpreter &suselinux; 16.4 The primary Python As the short-term +support interpreter &suselinux; 16.5 The legacy Python The primary +Python As the short-term support interpreter &suselinux; 16.6 The +primary Python As the short-term support interpreter diff --git a/concepts/adoc_output/packages_lifecycle_valkey.adoc b/concepts/adoc_output/packages_lifecycle_valkey.adoc new file mode 100644 index 000000000..8cad1a349 --- /dev/null +++ b/concepts/adoc_output/packages_lifecycle_valkey.adoc @@ -0,0 +1,56 @@ +%entities; ++]>++ + +Python update strategy + +The primary Python interpreter with its stack is 3.13 in &sle; +&productnumber; + +Python interpreter and stack support lifecycles + +The support lifecycles of the Python interpreter are the following: + +Short-term support interpreter + +It is supported for 2 years. The support lifecycle applies also to basic +packages: setuptools, venv, pip, wheel and pipx. + +Primary Python + +The OS's main Python. It is supported for 4 years and includes the +interpreter and its stack, /usr/bin/python3 and all packages with the +python3- prefix. + +Legacy Python + +It is supported for 4 years and includes the interpreter and stack but +not the python3- packages. A legacy interpreter is a former primary +Python. Short-term interpreters, by contrast, follow a different +lifecycle and are becoming legacy. + +Python release cycle + +The starting version of Python in &suselinux; 16 is Python 3.13. &suse; +plans to release every odd version of Python and provide the LTS for up +to 8 years. + +The latest Python interpreter is delivered with each minor release with +only short-term support. The outdated Python version is migrated to the +legacy mode and is supported for another 4 years. + +The following table shows probable versions of Python to be delivered +with particular &suselinux; versions. The exact version will depend on +the upstream community. + +Python versions delivered per &suselinux; minor release + +Python version &suselinux; minor release 3.13 3.14 3.15 3.16 3.17 3.18 +3.19 + +&suselinux; 16.0 The primary Python stack &suselinux; 16.1 The primary +Python stack As the short-term support interpreter &suselinux; 16.2 The +primary Python stack As the short-term support interpreter &suselinux; +16.3 The legacy Python The primary Python As the short-term support +interpreter &suselinux; 16.4 The primary Python As the short-term +support interpreter &suselinux; 16.5 The legacy Python The primary +Python As the short-term support interpreter &suselinux; 16.6 The +primary Python As the short-term support interpreter diff --git a/concepts/html_output/AI-intro-how-works.html b/concepts/html_output/AI-intro-how-works.html new file mode 100644 index 000000000..26aac30cf --- /dev/null +++ b/concepts/html_output/AI-intro-how-works.html @@ -0,0 +1,419 @@ +

SUSE AI architecture

+ +

+

SUSE AI is a cloud native solution that comprises multiple
 software building blocks. These blocks include the Linux operating
 system, a Kubernetes cluster with a Web UI management layer, supportive tools
 to utilize GPU capabilities, and other containerized applications that
 handle monitoring and security. The SUSE Application Collection includes a collection of
 AI-related applications called AI Library. +

+

+
+

SUSE AI building blocks

Linux operating system

The underlying operating system with the optional NVIDIA driver + installed. We prefer SUSE Linux Enterprise Server + (). If you + require an immutable operating system, SLE Micro is the recommended + alternative. +

+
Kubernetes cluster

Kubernetes cluster managed by SUSE Rancher Prime ensuring container and + application lifecycle management. We recommend using the SUSE Rancher Prime: RKE2 + () + distribution. +

+
NVIDIA GPU Operator

Utilizes the NVIDIA GPU computing power and capabilities for + processing AI-related tasks. +

+
SUSE Security ()

For security and compliance. +

+
SUSE Observability ()

Provides advanced performance and data monitoring. +

+
SUSE Storage ()

Enterprise-grade storage solution. +

+
SUSE Virtualization ()

For virtualized workloads. +

+
SUSE Multi-Linux Manager ()

For managing multiple Linux distributions. +

+
SUSE Application Collection ()

As a source of Helm charts and container images for the AI Library + applications. +

+
+

AI Library applications

cert-manager ()

An extensible X.509 certificate controller for Kubernetes workloads. +

+
OpenSearch ()

A search and analytics suite for analyzing and visualizing search data. +

+
Milvus ()

A vector database built for generative AI applications with minimal + performance loss. +

+
Ollama ()

A platform that simplifies the installation and management of large + language models (LLM) on local devices. +

+
Open WebUI ()

An extensible Web user interface for the Ollama LLM runner. +

+
vLLM ()

A high-performance inference and serving engine for large language + models (LLMs). +

+
mcpo ()

The MCP-to-OpenAPI proxy server provided by Open WebUI. +

+
PyTorch ()

An open source machine learning framework. +

+
MLflow ()

An open source platform to manage the machine learning lifecycle, + including experimentation, reproducibility, deployment and a central + model registry. +

+
+

Figure 1. Basic schema of SUSE AI

+ +

+

+
+
\ No newline at end of file diff --git a/concepts/html_output/packages_lifecycle_desktop_components.html b/concepts/html_output/packages_lifecycle_desktop_components.html new file mode 100644 index 000000000..62b106acf --- /dev/null +++ b/concepts/html_output/packages_lifecycle_desktop_components.html @@ -0,0 +1,126 @@ +

Update strategy of desktop components

+ + +

+

In general, desktop components follow the balanced lifecycle of their packages. +

+

+
+

The SLE 16 desktop environment provides the following components: +

+

GNOME desktop

The minimum version is 48.0. The exact version must be determined before the +end of beta releases. +

+
GStreamer, PipeWire and Flatpak

These components are updated to the latest stable branch version at the last beta release. +

+
Firefox

The minimum version is 140.3. The update strategy follows the ESR update lifecycle. +

+
WebKit

The minimum version is 2.46. WebKit has a periodic update cycle + determined by critical CVEs. The component is updated but not backported. +

+
BRLTTY

The accessibility tools are at the latest upstream stable versions in 16.0. Only bug fixes will happen in this minor release. +

+
Qt

Qt 6 is delivered. The initial version is Qt 6.9. +

+
KDE

Not a standard SUSE Linux delivery; available only in PackageHub 16. +

+
+
\ No newline at end of file diff --git a/concepts/html_output/packages_lifecycle_faq.html b/concepts/html_output/packages_lifecycle_faq.html new file mode 100644 index 000000000..68dbb61e8 --- /dev/null +++ b/concepts/html_output/packages_lifecycle_faq.html @@ -0,0 +1,212 @@ +

Package maintainer's FAQ

+ + +

+

This topic covers frequently asked questions that a package maintainer may have. +

+

+
+
1.
+ + +
2.
+ + +
3.
+ + +
4.
+ +
+

1.

I want to define a lifecycle for my package. What should I do?

+
+

Raise a Jira ticket so we can track and document the process. If you are not sure about + the lifecycle category, contact your manager and a SLEA architect. +

+
+
+

2.

Which type of lifecycle can or should I adopt?

+
+

The further down in the stack your component is, the more packages depend on + it, so you should take a more conservative approach. When in doubt, contact the + SLEA architects. +

+
+
+

3.

I want to use a balanced lifecycle for my package. Should I go for several versions in parallel?

+
+

In general, it is better to replace the old package with a new version. It creates less + overhead on the maintainer side and is less confusing to customers. A sliding window + is useful for most toolchain components that have API or ABI changes as part of + new releases. However, the particular approach depends on the amount and type of changes. + For example, if the changes can be adopted easily or even automated, then it is better to + just update the package. When changes lead to conflicts that neither we nor the customer can easily resolve, then maintaining parallel versions is a better option. +

+

SUSE tried this on a case-by-case basis for Python modules. +

+
+
+

4.

Should I convert my package from a balanced lifecycle to an agile one, and release updated versions also to code streams under LTS?

+
+

This is a question that we need to investigate in every case. In general, if + the particular package is beneficial for customers on the older releases, then it may be + worth releasing the updated packages to older releases. For example, if a component is + mostly about operations, then it may be useful for customers on older releases. On + the contrary, newer packages that provide hardware enablement are usually not needed, as + customers' hardware has not changed since the time of installation. +

+
+
+
+
\ No newline at end of file diff --git a/concepts/html_output/packages_lifecycle_python.html b/concepts/html_output/packages_lifecycle_python.html new file mode 100644 index 000000000..a0368e677 --- /dev/null +++ b/concepts/html_output/packages_lifecycle_python.html @@ -0,0 +1,253 @@ +

Python update strategy

+ + +

+

The primary Python interpreter with its stack is 3.13 in SUSE Linux Enterprise 16. +

+

+
+

1. Python interpreter and stack support lifecycles

+ +

The support lifecycles of the Python interpreter are the following: +

+

Short-term support interpreter

It is supported for 2 years. The support lifecycle applies also to basic packages: + setuptools, venv, pip, wheel and pipx. +

+
Primary Python

The OS's main Python. It is supported for 4 years and includes the interpreter and its stack, + /usr/bin/python3 and all packages with the python3- + prefix. +

+
Legacy Python

It is supported for 4 years and includes the interpreter and stack but not the python3- packages. A legacy interpreter is a former primary Python. Short-term interpreters, by contrast, follow a different lifecycle and do not become legacy. +
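Applications can check at run time which of these interpreters they are using. The following minimal sketch distinguishes the primary Python (/usr/bin/python3) from an interpreter installed in parallel; the assumption that parallel interpreters ship as versioned binaries (for example /usr/bin/python3.14) is an illustration, not stated above:

import os
import sys

# Print the version and path of the running interpreter.
print(f"Running Python {sys.version_info.major}.{sys.version_info.minor}")
print(f"Interpreter: {sys.executable}")

# Compare against the OS's main Python, which the text above defines as
# /usr/bin/python3 together with the python3- prefixed packages.
primary = os.path.realpath("/usr/bin/python3")
if os.path.realpath(sys.executable) == primary:
    print("This is the primary Python of the operating system.")
else:
    print("This is a parallel interpreter (short-term support or legacy).")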

+
+
+

2. Python release cycle

+ +

The starting version of Python in SUSE Linux 16 is Python 3.13. SUSE plans to release every odd version of Python (3.13, 3.15 and so on) and provide LTS for up to 8 years. +

+

The latest Python interpreter is delivered with each minor release, with short-term support only. The outdated Python version is moved to legacy mode and is supported for another 4 years. +

+

The following table shows probable versions of Python to be delivered with particular + SUSE Linux versions. The exact version will depend on the upstream community. +

+

Table 1. Python versions delivered per SUSE Linux minor release

+ + + + + + + + + + + + +
+
+
+ +
\ No newline at end of file diff --git a/concepts/html_output/packages_lifecycle_toolchain.html b/concepts/html_output/packages_lifecycle_toolchain.html new file mode 100644 index 000000000..18c1bac2e --- /dev/null +++ b/concepts/html_output/packages_lifecycle_toolchain.html @@ -0,0 +1,485 @@ +

Update strategy of toolchain components

+ + +

+

The toolchain components include the following tools: the GNU C library, the GCC and G++ + compilers, binutils, GDB and LLVM. Each of the component update + strategies is described in corresponding sections. +

+

+
+

1. GNU C library (glibc)

+ +

The initial glibc version is 2.40. The package is + updated with each minor SUSE Linux release if there are reasons for changes to the package (for + example feature requests, performance tuning and so on). +

+

Package updates provide backward compatibility for dynamic linking, allowing programs built on previous SUSE Linux 16 releases to run. However, symbols deprecated in the upstream glibc version are no longer declared for the compiler and are not available for link editing (static linking). Such cases, where source-level and static-linking backward compatibility is not guaranteed, are properly documented. +
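To see which glibc a system actually ships, which is relevant when relying on the dynamic-linking guarantees above, the version can be queried at run time. A minimal sketch, assuming a glibc-based Linux host:

import ctypes

# gnu_get_libc_version() is a public glibc call that returns the version string.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p

version = libc.gnu_get_libc_version().decode()
print(f"Installed glibc: {version}")  # expected to be 2.40 or later on SUSE Linux 16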

+
+ +

2. Compiler for user space applications and libraries

+ +

Developers of user-space applications can use the supported GNU Compiler Collection (GCC) C + and C++ built-in compilers. Compilers for other languages, cross-compilers and accelerator + offloading compilers are not available on SUSE Linux from standard repositories, but developers + can install them from Package Hub with community support. +

+

The initial major version in SUSE Linux 16.0 and 16.1 is GCC 15. Later SUSE Linux releases will introduce the + tick-tock model: +

+

+

2.1. Supported compiler flags

+ +

Any combination of the following compiler flags is supported: +

+

  • -O0, -O2 and -O3 +

    +
  • -ffast-math +

    +
  • -flto +

    +
  • -fpie and -fno-pie +

    +
  • -fPIC +

    +
  • -g +

    +
+

+ The following options are also supported on AMD64/Intel 64: +

+

  • -march=x86-64-v2 (the + default one) +

    +
  • -march=x86-64-v3 +

    +
  • -march=x86-64-v4 +

    +
+

Other compiler flags are not supported by SUSE. However, we can assist in reporting issues + to the upstream GCC project. +
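As an illustration, the sketch below compiles a trivial C program with a combination of the supported flags listed above. It assumes gcc is installed and an AMD64/Intel 64 host (for the -march=x86-64-v3 option); this is only an example invocation, not a SUSE-provided script:

import pathlib
import subprocess
import tempfile

flags = ["-O2", "-flto", "-fPIC", "-g", "-march=x86-64-v3"]  # all listed as supported above

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "hello.c"
    src.write_text('#include <stdio.h>\nint main(void) { puts("hello"); return 0; }\n')
    binary = pathlib.Path(tmp) / "hello"

    # Build with the supported flag combination and run the result.
    subprocess.run(["gcc", *flags, str(src), "-o", str(binary)], check=True)
    subprocess.run([str(binary)], check=True)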

+
+ +

2.2. Supported language versions

+ +

For the C language, the most recent supported version is ISO/IEC 9899:2024 (known as C23) + with GNU extensions. +

+

For C++, the most recent supported version is ISO/IEC 14882:2017 (known as C++17) with GNU extensions. +

+

SUSE also provides unsupported packages with compilers for Fortran, Ada and Go (gcc-go). +

+
+
+

3. The kernel module compiler

+ +

To build a kernel module, you need the same compiler version used to build the + kernel. Therefore, SUSE provides the same GCC version—initially GCC 13.N. +

+

+ The compiler for kernel modules is not intended for general use. The kernel module compiler + may also be dropped in a future minor release of SUSE Linux. +

+
+

4. The build compiler

+ +

A default compiler, GCC 13, is used internally to build SUSE Linux 16 packages. The build compiler is provided as an unsupported package in PackageHub. +

+

Package maintainers can use the newer GCC available in the internal build service, but in this + case, they must be aware of possible ABI issues (for example, avoid linking code written in different C++ standards). +

+
+ +

5. GCC and C++ runtime libraries

+ +

GCC runtime libraries (libgcc, libstdc++) are updated yearly to the versions from the new major GCC release in all SUSE Linux minor releases under LTS. The runtime libraries are updated through maintenance updates. +

+

The runtime libraries are fully supported during the general support and LTS of each SUSE Linux + minor version. +

+

In scenarios where security incidents may happen, such as processing untrusted input, these runtime libraries are the only part of the toolchain that can be used. +

+
+ +

6. The GNU Binutils

+ +

The GNU Binutils are upgraded to the latest upstream version in all SUSE Linux 16 minor releases under general support or LTS. +

+
+ +

7. The GNU project debugger

+ +

GDB is updated to the newest major version on all SUSE Linux 16 minor releases under general support + or LTS. However, this means that certain functionality may be removed from the package. +

+
+ +

8. LLVM

+ +

LLVM is available for use exclusively with MESA. Any other use of LLVM is not supported. Front-ends like Clang are not provided on SUSE Linux. You + may get them only from a community-supported repository. +

+
+ +

9. Compatibility and deprecation policy

+ +

SUSE maintains backward dynamic-linking compatibility for glibc and the C++ + compiler runtime library. This means that a binary built on an earlier minor version of SUSE Linux + 16 runs correctly on a later minor release. +

+

Features deprecated by upstream are removed either from newer major versions of compilers or + from all toolchain components in later SUSE Linux minor versions. In the case of + glibc, the deprecated symbols are then removed from header files and are + not available for link editing. Therefore, the code using these symbols will no longer compile or (statically) link. +

+
+ + +

10. Security considerations

+ +

Development tools like the compiler are not hardened to process untrusted input. In contrast, + the GCC and C++ runtime libraries are the only parts of the toolchain that are hardened for this purpose. +

+
+ +
\ No newline at end of file diff --git a/concepts/html_output/packages_lifecycle_types.html b/concepts/html_output/packages_lifecycle_types.html new file mode 100644 index 000000000..a22e663fb --- /dev/null +++ b/concepts/html_output/packages_lifecycle_types.html @@ -0,0 +1,392 @@ +

Types of package life cycles

+ + +

+

On SUSE Linux 16, component packages are sorted into different lifecycle categories. This section describes + the criteria for such a sorting. +

+

+
+

When categorizing a package, the impact of its changes on the system is considered. To estimate this impact, the interfaces of the component must first be identified. For example, in the case of a shared library, changes to its API or ABI may disrupt the system. For a compiler or interpreter, disruptive changes may involve supported languages, command-line options, or the performance of the compiled code. By contrast, a minor backward compatible change may have little to no impact. +

+

In general, a package falls into exactly one of the categories: stable, balanced or agile. However, some technologies may have packages sorted into several categories, for example, Python. The following sections describe the package categories in detail. +

+

1. Stable

+ +

Packages that are marked as stable (also called conservative) are those that do not deliver a disruptive change while a customer is on any of the SUSE Linux 16 minor versions. During the upgrade to another minor version, the package version may change, but the newer version does not introduce incompatible behavior. Customers expect to have LTS on these packages. +

+

The packages belonging to this category can change, but the following criteria apply: +

+

+

Under exceptional circumstances like serious security issues, the package can be updated even + at the cost of bringing disruptive changes. Alternatively, if a new version of a package + contains disruptive changes, this version can be delivered as an alternative to the previous one. +

+

A typical example of a stable package is util-linux. Another is + glibc, which remains backward compatible except for symbols deprecated upstream. +

+
+

2. Balanced

+ +

Packages categorized as balanced are changing (driven by upstream evolution and customer demands), but should not cause disruptive changes + within one minor version. A few incompatible changes are possible, but always + documented in the release notes for that particular minor release. +

+

Customers expect a moderate number of changes during the upgrade from one minor release to another. However, the transition must be smooth, either by getting back to the original behavior or by providing the older version in parallel with the new one. +

+

When a change is introduced, it can happen in one of the following ways: +

+

+

Such versions are supported at least until the end of the LTS of the minor release that introduced them. +

+

To help with incorporating changes in a conservative environment, the tick-tock model can be + used. For example, we could mark even-numbered minor releases as tick + releases and odd-numbered minor releases as tock. These tock releases could still introduce version updates in packages with a balanced lifecycle, but these updates remain fully backward compatible in all relevant aspects. +

+

For ISVs, a stable runtime environment is critical. Therefore, to avoid breaking third-party applications, in the case of shared libraries where SUSE provides the corresponding -devel package, the older .so version is not deprecated immediately. For example, for a package called foo, there are packages: libfoo-0_1, foo-devel and foo-utils. If the package is updated and the shared library version changes to libfoo-0_2, libfoo-0_1 is not removed. +
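The reason both shared-library packages can coexist is that applications request a library by its exact soname at run time. The libfoo packages above are hypothetical, so the short sketch below illustrates the same point with zlib (libz.so.1), which ships on SUSE Linux systems:

import ctypes

# A binary linked against a given soname keeps asking for exactly that name;
# as long as libz.so.1 stays installed, the application keeps working even if
# a newer, incompatible library version is added alongside it.
libz = ctypes.CDLL("libz.so.1")
libz.zlibVersion.restype = ctypes.c_char_p
print("Loaded libz.so.1, version", libz.zlibVersion().decode())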

+ +

Typical examples of balanced packages include the following: +

+

+
+

3. Agile

+ +

Packages categorized as agile are up to date with upstream even though they may bring + incompatible changes to the system. +

+

Updates to these packages are done in two possible ways: +

+

+

All new package versions are released simultaneously to all minor releases under general support and generally also in LTS. +

+

The following packages are categorized as agile: +

+

+
+
\ No newline at end of file diff --git a/concepts/html_output/packages_lifecycle_valkey.html b/concepts/html_output/packages_lifecycle_valkey.html new file mode 100644 index 000000000..8a83895ad --- /dev/null +++ b/concepts/html_output/packages_lifecycle_valkey.html @@ -0,0 +1,83 @@ +

Valkey update strategy

+ + +

+

Valkey is a high-performance data structure server that primarily serves key/value workloads. It supports a wide range of native structures and an extensible plug-in system for adding new data structures and access patterns. +
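Valkey speaks the same wire protocol as Redis, so existing clients work unchanged. A minimal key/value round trip, assuming a Valkey server listening on localhost:6379 and the protocol-compatible redis-py client installed (the dedicated valkey Python client can be used the same way):

import redis  # protocol-compatible with Valkey

client = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Simple key/value workload: store and read back a string.
client.set("greeting", "hello from Valkey")
print(client.get("greeting"))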

+

+
+

On SUSE Linux Enterprise 16, Valkey is updated and supported according to the following rules: +

+

+
\ No newline at end of file diff --git a/references/adoc_output/AI-requirements-hardware-gpu.adoc b/references/adoc_output/AI-requirements-hardware-gpu.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-hardware-gpu.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. diff --git a/references/adoc_output/AI-requirements-hardware.adoc b/references/adoc_output/AI-requirements-hardware.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-hardware.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. 
diff --git a/references/adoc_output/AI-requirements-milvus.adoc b/references/adoc_output/AI-requirements-milvus.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-milvus.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. diff --git a/references/adoc_output/AI-requirements-observability.adoc b/references/adoc_output/AI-requirements-observability.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-observability.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. 
diff --git a/references/adoc_output/AI-requirements-ollama.adoc b/references/adoc_output/AI-requirements-ollama.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-ollama.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. diff --git a/references/adoc_output/AI-requirements-openwebui.adoc b/references/adoc_output/AI-requirements-openwebui.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-openwebui.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. 
diff --git a/references/adoc_output/AI-requirements-rancher.adoc b/references/adoc_output/AI-requirements-rancher.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-rancher.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. diff --git a/references/adoc_output/AI-requirements-security.adoc b/references/adoc_output/AI-requirements-security.adoc new file mode 100644 index 000000000..6931e4c39 --- /dev/null +++ b/references/adoc_output/AI-requirements-security.adoc @@ -0,0 +1,43 @@ +%entities; ++]>++ + +GPU hardware for AI/ML workloads + +To run AI/ML workloads, such as training machine learning models or +running inference workloads, deploy cluster nodes with compatible +&nvidia; GPUs to gain acceleration. + +Using the &nvoperator; + +Configuring and managing nodes with hardware resources can require +multiple configurations for software components. These include drivers, +container runtimes and libraries. To use &nvidia; GPUs in a &kube; +cluster, you need to configure the &nvoperator;. Because GPU is a +special resource in the cluster, you need to install the following +components to enable deployment of workloads for processing on the GPU. + +&nvidia; drivers (to enable CUDA) + +&kube; device plug-in + +Container runtime + +Other tools to provide capabilities such as monitoring or automatic node +labeling + +To ensure that the &nvoperator; is installed correctly, the &kube; +cluster must meet the following prerequisites: + +All worker nodes must run the same operating system version to use the +&nvidia; GPU Driver container. + +Nodes must be configured with a container engine, such as &docker; +(CE/EE), &containerd; or &podman;. + +Nodes should be equipped with &nvidia; GPUs. + +Nodes should have &nvidia; drivers installed. + +Supported GPUs + +The &nvoperator; is compatible with a range of &nvidia; GPUs. For a full +list of supported GPUs, refer to &nvoperator; Platform Support. 
diff --git a/references/html_output/AI-requirements-hardware-gpu.html b/references/html_output/AI-requirements-hardware-gpu.html new file mode 100644 index 000000000..c601f1974 --- /dev/null +++ b/references/html_output/AI-requirements-hardware-gpu.html @@ -0,0 +1,144 @@ +

GPU hardware for AI/ML workloads

+ +

+

To run AI/ML workloads, such as training machine learning models or + running inference workloads, deploy cluster nodes with compatible + NVIDIA GPUs to gain acceleration. +

+

+
+

1. Using the NVIDIA GPU Operator

+ +

Configuring and managing nodes with hardware resources can require + multiple configurations for software components. These include drivers, + container runtimes and libraries. To use NVIDIA GPUs in a Kubernetes + cluster, you need to configure the NVIDIA GPU Operator. Because GPU is a special + resource in the cluster, you need to install the following components to + enable deployment of workloads for processing on the GPU. +

+

+

To ensure that the NVIDIA GPU Operator is installed correctly, the Kubernetes cluster + must meet the following prerequisites: +

+

+
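Once the prerequisites above are met and the NVIDIA GPU Operator is deployed, a quick sanity check is to verify that nodes advertise GPUs to the scheduler. A minimal sketch, assuming the kubernetes Python client is installed and a kubeconfig with access to the cluster is available:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# Nodes managed by the GPU Operator expose the nvidia.com/gpu resource.
for node in core.list_node().items:
    allocatable = node.status.allocatable or {}
    gpus = allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: allocatable nvidia.com/gpu = {gpus}")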
+

2. Supported GPUs

+ +

The NVIDIA GPU Operator is compatible with a range of NVIDIA GPUs. For a full + list of supported GPUs, refer to + NVIDIA GPU Operator + Platform Support. +

+
+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-hardware.html b/references/html_output/AI-requirements-hardware.html new file mode 100644 index 000000000..234071d92 --- /dev/null +++ b/references/html_output/AI-requirements-hardware.html @@ -0,0 +1,367 @@ +

SUSE AI hardware requirements

+ +

+

For successful deployment and operation, SUSE AI has the same hardware prerequisites as a SUSE Rancher Prime: RKE2 cluster. For requirements of individual applications, refer to . +

+

+
+

1. Recommended hardware (basic functionality)

+ +

RAM

At least 32 GB of RAM per node. This is the minimum + recommendation for the control plane node. Additional resources may + be needed for the worker nodes based on workload. +

+
CPU

A multicore processor with a minimum of 4 cores. 8 cores or more may + be necessary depending on the cluster scale and application demands. +

+
Disk space

  • 50 GB or more is recommended for control plane nodes. +

    +
  • Additional space for data storage, such as application data or + log files, is required depending on the deployment scale and the + workloads running on the cluster. +

    +
  • SSDs or high-speed storage are preferred for faster data access + and efficient operation of containerized workloads. +

    +
+
Networking

  • A reliable and stable network connection between all nodes in + the cluster. +

    +
  • Cluster nodes must have valid DNS A records following the + *.apps.CLUSTER_DOMAIN pattern. The + nodes must be able to communicate with each other and access + external resources, such as container images or software + updates. +

    +
  • Ensure that all nodes have public IP addresses or are accessible + via VPN or other private network if deploying across multiple + data centers. +

    +
+
+
+

2. Recommended hardware (for High Availability)

+ +

While 32 GB of RAM is the minimum for basic functionality, a + production-grade deployment with high availability, multi-node clusters, + or running resource-intensive applications like AI/ML workloads might + require more. +

+
+

RAM

64 GB or more per node is recommended for larger clusters or to + run applications with high resource demands. +

+
CPU

At least 8 cores, ideally 16 or more cores, depending on the + expected load. +

+
Disk space

  • For larger-scale clusters or persistent storage applications, + 100 GB or more of disk space per node might be required. +

    +
  • Using high-performance SSDs is recommended, especially for + workloads with high I/O requirements, such as databases or AI/ML + model training. +

    +
+
Networking

Ensure a low-latency, high-throughput network for efficient + communication between nodes, especially if deploying in multi-region + or multi-cloud environments. +

+
+

For more detailed hardware recommendations, refer to the official SUSE Rancher Prime: RKE2 + installation requirements documentation at + . +

+
+
+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-milvus.html b/references/html_output/AI-requirements-milvus.html new file mode 100644 index 000000000..f344d0439 --- /dev/null +++ b/references/html_output/AI-requirements-milvus.html @@ -0,0 +1,336 @@ +

Milvus requirements

+ +

+

This topic describes requirements for the Milvus application. +

+

+
+

1. Hardware requirements

+ +

1.1. Minimum requirements

+ +

The following requirements are for basic Milvus deployment on a single + node or a small scale. +

+

RAM

A minimum of 32 GB of RAM. +

+
CPU

At least 8 CPU cores. +

+
Disk space

At least 100 GB of storage, preferably SSD. +

+
Networking

A stable connection with 1 Gbps network bandwidth. +

+
+
+

1.2. Recommended hardware for large-scale workloads

+ +

The following requirements are for multi-node Milvus clusters or heavy + workloads, such as large vector databases. +

+

RAM

A minimum of 64 GB of RAM per node. +

+
CPU

8–16 CPU cores per node or more. +

+
Disk space

500 GB or more of high-speed storage, ideally SSD or NVMe + SSD. +

+
Networking

10 Gbps Ethernet or faster for high-performance clusters. +

+
+
+

1.3. CPU instruction set requirements

+ +

The following CPU instruction sets are required for Milvus: +

+

  • SSE4.2 +

    +
  • AVX +

    +
  • AVX2 +

    +
  • AVX-512 +

    +
+
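A quick way to confirm that a node's CPU provides the instruction sets listed above is to inspect the kernel-reported feature flags. A minimal sketch, assuming a Linux host where /proc/cpuinfo lists the flags (AVX-512 appears as a family of avx512* flags):

required = {"sse4_2", "avx", "avx2"}

flags = set()
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

missing = sorted(required - flags)
has_avx512 = any(flag.startswith("avx512") for flag in flags)

print("Missing instruction sets:", ", ".join(missing) if missing else "none")
print("AVX-512 present:", "yes" if has_avx512 else "no")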
+
+

2. Software requirements

+ +

Running Milvus requires specific versions of the following software: +

+

Kubernetes

SUSE-supported versions of SUSE Rancher Prime: RKE2 that use Kubernetes 1.18 or higher. +

+
Helm

The recommended version is 3.5.0 or later. +

+
+
+

3. Additional considerations

+ +

Disk and storage

  • Storage type: SSDs or NVMe SSDs + are highly recommended for fast read/write access to large + datasets and high-performance vector retrieval. +

    +
  • Metadata and data storage: For + large-scale deployments, ensure that metadata and vector data are + stored on fast disks (SSD or NVMe). +

    +
+
Network

For high-performance clusters, especially for large-scale + deployments, ensure high-bandwidth network connectivity between + nodes. +

+
+
+

4. For more information

+ +

For more detailed hardware recommendations, refer to + the official + Milvus + and + prerequisite + Docker documentation. +

+
+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-observability.html b/references/html_output/AI-requirements-observability.html new file mode 100644 index 000000000..faba3fd44 --- /dev/null +++ b/references/html_output/AI-requirements-observability.html @@ -0,0 +1,98 @@ +

SUSE Observability requirements

+ +
+

1. Minimum hardware requirements

+ +

Nodes for HA setup

At least 3 nodes. +

+
RAM

A minimum of 32 GB of RAM. +

+
CPU

At least 16 CPU cores. +

+
Disk space

At least 5 GB of storage, preferably SSD. +

+
+
+

2. For more information

+ +

For more detailed recommendations, refer to the following official + documentation: +

+

+
+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-ollama.html b/references/html_output/AI-requirements-ollama.html new file mode 100644 index 000000000..14bf29db5 --- /dev/null +++ b/references/html_output/AI-requirements-ollama.html @@ -0,0 +1,352 @@ +

Ollama requirements

+ +

+

The version of Ollama provided with SUSE AI is optimized for NVIDIA GPU hardware. This section guides you through the steps for configuring Ollama on an NVIDIA-enabled system, including necessary configurations for both the hardware and software. +

+

+
+

General recommendations
  +

  • Run Ollama on NVIDIA GPU nodes. + Since Ollama is GPU-optimized, using the power of NVIDIA GPUs is + essential for maximum performance. This ensures that the application + runs efficiently and fully uses the hardware capabilities. +

    +
  • Assign applications to specific nodes. SUSE AI provides a mechanism to assign applications, such as Ollama, to specific nodes. For more details, refer to . +

    +
+
+

1. Hardware requirements

+ +

NVIDIA GPU

  • The recommended GPU models include Tesla, A100, V100, RTX 30 + series, or other compatible NVIDIA GPUs. +

    +
  • Ensure that the CUDA Compute Capability of your GPU is + compatible with the required version of Ollama. +

    +
+
RAM

At least 16 GB of RAM is recommended. However, higher amounts + (32 GB or more) may be necessary for larger models or + workloads. +

+
Disk space

At least 50 GB of free disk space is recommended for storing + the container images and any data files processed by Ollama. +

+
+
+

2. Software requirements

+ +

NVIDIA Docker (nvidia-docker)

  • You must install nvidia-docker (the NVIDIA + Container Toolkit) to allow Docker containers to use the GPU. + Refer to + + for more details. +

    +
+
CUDA Toolkit

You must install the CUDA version supported by your GPU model. For + most recent GPUs, CUDA 11.0 or later is required. Refer to + CUDA + Toolkit installation guide for more details. +

+
NVIDIA driver

Install the NVIDIA driver compatible with your GPU model. Its + version must be compatible with the installed CUDA toolkit. +

+

You can check your GPU driver version by running the + nvidia-smi command. +
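The same check can be scripted. A minimal sketch, assuming the NVIDIA driver is installed and nvidia-smi is on the PATH:

import subprocess

# Ask nvidia-smi for the GPU model and driver version in machine-readable form.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
    check=True, capture_output=True, text=True,
)
for line in result.stdout.splitlines():
    name, driver = (field.strip() for field in line.rsplit(",", 1))
    print(f"GPU: {name}, driver: {driver}")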

+
+
+
+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-openwebui.html b/references/html_output/AI-requirements-openwebui.html new file mode 100644 index 000000000..314359616 --- /dev/null +++ b/references/html_output/AI-requirements-openwebui.html @@ -0,0 +1,69 @@ +

Open WebUI requirements

+ +

+

While Open WebUI has no specific hardware dependencies beyond those of the + underlying platform, consider the following guidelines for optimal + performance. +

+

+
+

+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-rancher.html b/references/html_output/AI-requirements-rancher.html new file mode 100644 index 000000000..5aa5ff4c8 --- /dev/null +++ b/references/html_output/AI-requirements-rancher.html @@ -0,0 +1,105 @@ +

SUSE Rancher Prime requirements

+ +
+

1. Minimum hardware requirements

+ +

Nodes for HA setup

At least 3 nodes. +

+
RAM

A minimum of 32 GB of RAM. +

+
CPU

At least 8 CPU cores. +

+
Disk space

At least 200 GB of storage, preferably SSD. +

+
+
+

2. For more information

+ +

For more detailed recommendations, refer to the following official + documentation: +

+

+
+
\ No newline at end of file diff --git a/references/html_output/AI-requirements-security.html b/references/html_output/AI-requirements-security.html new file mode 100644 index 000000000..b0e16f065 --- /dev/null +++ b/references/html_output/AI-requirements-security.html @@ -0,0 +1,127 @@ +

SUSE Security requirements

+ +
+

1. Minimum hardware requirements

+ +

Nodes for HA setup

The following container instances run on existing cluster nodes: +

+

  • 1 Manager instance +

    +
  • 3 Controller instances +

    +
  • 1 Enforcer instance on each cluster node +

    +
  • 2 Scanner & Updater instances +

    +
+
RAM

A minimum of 2 GB of RAM. +

+
CPU

At least 2 CPU cores. +

+
Disk space

At least 5 GB of storage, preferably SSD. +

+
+
+

2. For more information

+ +

For more detailed recommendations, refer to the following official + documentation: +

+

+
+
\ No newline at end of file