diff --git a/content/docs/guides/databricks-federated-sql.md b/content/docs/guides/databricks-federated-sql.md
new file mode 100644
index 0000000000..8a14530a8f
--- /dev/null
+++ b/content/docs/guides/databricks-federated-sql.md
@@ -0,0 +1,259 @@
+---
+title: Run federated queries on Neon Postgres from Databricks
+subtitle: Learn how to connect Databricks to your Neon Postgres database using lakehouse federation to query data in place.
+enableTableOfContents: true
+isDraft: false
+updatedOn: '2025-06-10T00:00:00.000Z'
+---
+
+Databricks lakehouse federation allows you to run federated queries against external data sources directly from Databricks. This means you can connect your Databricks workspace to your Neon Postgres database and query it without needing to copy or move the data, letting you combine data from Neon with other data sources managed by Databricks and leverage the analytics and governance capabilities of the Databricks Lakehouse Platform.
+
+This guide will walk you through setting up lakehouse federation to query data residing in your Neon Postgres database.
+
+## Why use lakehouse federation with Neon Postgres?
+
+Lakehouse federation provides several benefits when integrating Neon Postgres with Databricks:
+
+- **Query data in place:** Access and query your Neon Postgres data where it lives, eliminating the need for complex and time-consuming ETL processes to move data into Databricks. This ensures you're always working with the freshest data.
+- **Faster insights:** Get to insights quicker by directly querying live data. This is ideal for ad hoc reporting, proof-of-concept work, and the exploratory phase of new data pipelines or reports.
+- **Unified governance:** Manage access to your Neon data through Databricks Unity Catalog. This includes fine-grained access control (table and view-level permissions), data lineage to track how data is used, and auditing capabilities.
+- **Leverage external compute:** For complex analytical workloads, you can use Databricks compute resources to run queries against your Neon Postgres data, taking advantage of Databricks' performance optimizations and scalability.
+
+## Prerequisites
+
+Before you begin, ensure you have the following:
+
+### Neon prerequisites
+
+A source [Neon project](/docs/manage/projects#create-a-project) with a database containing the data you want to query. If you're just testing this out and need some data to play with, you can run the following statements from the [Neon SQL Editor](/docs/get-started-with-neon/query-with-neon-sql-editor) or an SQL client such as [psql](/docs/connect/query-with-psql-editor) to create a table with sample data:
+
+ ```sql shouldWrap
+ CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL);
+ INSERT INTO playing_with_neon(name, value)
+ SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i);
+ ```
+
+To connect to your Neon Postgres database from Databricks, you'll need your connection details. You can find them by clicking the **Connect** button on your Neon **Project Dashboard**. Learn more: [Connect from any application](/docs/connect/connect-from-any-app).
+
+> It's recommended that you create a dedicated Postgres role in Neon with the principle of least privilege (e.g., `SELECT` on specific tables/schemas) for use with Databricks.
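+
+For example, a minimal read-only role might be set up as follows (the role name, password, and schema are illustrative):
+
+```sql
+-- Create a login role for Databricks with read-only access
+CREATE ROLE databricks_reader WITH LOGIN PASSWORD '<password>';
+GRANT USAGE ON SCHEMA public TO databricks_reader;
+GRANT SELECT ON ALL TABLES IN SCHEMA public TO databricks_reader;
+-- Also cover tables created in the future
+ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO databricks_reader;
+```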
+
+### Databricks prerequisites
+
+- A Databricks workspace enabled for Unity Catalog.
+- **Compute requirements:**
+ - Databricks clusters must use Databricks Runtime 13.3 LTS or above and be configured with **Standard** or **Dedicated** access mode.
+ - SQL warehouses must be **Pro** or **Serverless** and use version 2023.40 or above.
+- **Permissions required in Databricks:**
+ - To create a connection: You must be a metastore admin or a user with the `CREATE CONNECTION` privilege on the Unity Catalog metastore attached to the workspace.
+ - To create a foreign catalog: You must have the `CREATE CATALOG` permission on the metastore and be either the owner of the connection or have the `CREATE FOREIGN CATALOG` privilege on the connection.
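+
+If you administer the metastore, these privileges can be granted with SQL along the following lines (the account email is illustrative):
+
+```sql
+GRANT CREATE CONNECTION ON METASTORE TO `data.engineer@example.com`;
+GRANT CREATE CATALOG ON METASTORE TO `data.engineer@example.com`;
+```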
+
+## Setting up lakehouse federation for Neon Postgres
+
+Follow these steps to configure Databricks to query your Neon Postgres database.
+
+### Create a connection to Neon Postgres
+
+A connection in Databricks allows you to define how to connect to an external data source like Neon Postgres. It stores the necessary details, such as the hostname, port, user credentials, and connection type. This connection can then be used to create a foreign catalog that mirrors your Neon database structure in Databricks.
+
+You can create a connection using either Catalog Explorer or SQL.
+
+<Tabs labels={["Catalog Explorer", "SQL"]}>
+
+<TabItem>
+
+1. In your Databricks workspace, navigate to **Catalog**.
+2. In the Catalog Explorer, click the **+ Add Data** button at the top of the left pane and select **Create a connection**.
+3. On the **Set up connection** wizard:
+ - **Connection name:** Enter a user-friendly name for your connection (e.g., `neon_production_connection`).
+ - **Connection type:** Select `PostgreSQL` from the dropdown.
+ - **(Optional) Comment:** Add a description for the connection.
+ - Click **Next**.
+   ![Databricks Set up connection wizard](/docs/guides/databricks-federated-sql-create-external-connection-ui.png)
+4. On the **Authentication** page, enter the connection properties for your Neon Postgres instance:
+ - **Host:** Your Neon Postgres hostname (e.g., `ep-cool-darkness-123456.us-east-2.aws.neon.tech`).
+ - **Port:** `5432`
+ - **User:** The Neon Postgres role
+ - **Password:** The password for the Neon Postgres role.
+5. Click **Create connection**.
+6. On the **Catalog Basics** page, in the **Database name** field, enter the name of the Neon Postgres database you want to query (e.g., `neondb`, `postgres`, or your custom database name).
+7. Click **Test connection** to verify the connection details. If successful, you will see a confirmation message.
+8. Click **Create catalog** to finalize the connection setup.
+9. On the **Access** page, select the workspaces in which users can access this connection and grant appropriate privileges. You can assign **READ ONLY (Data Reader)** or **READ WRITE (Data Editor)** access depending on your use case.
+10. (Optional) On the **Metadata** page, specify tags for the connection.
+11. Click **Create connection**.
+
+</TabItem>
+
+<TabItem>
+
+You can create a connection by running the `CREATE CONNECTION` SQL command in a Databricks notebook or the SQL query editor. As a security best practice, use Databricks secrets to store your Neon credentials.
+
+1. Set up Databricks secrets to store your Neon Postgres credentials. Follow the [Databricks Secrets documentation](https://docs.databricks.com/aws/en/security/secrets) to create a secret scope and add secrets for your Neon Postgres user and password.
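+
+   For example, using the Databricks CLI (the scope and key names are illustrative; you are prompted for each secret value):
+
+   ```bash
+   # Create a scope, then add one secret per credential
+   databricks secrets create-scope neon-scope
+   databricks secrets put-secret neon-scope neon-user
+   databricks secrets put-secret neon-scope neon-password
+   ```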
+
+2. Run the following SQL command in the Databricks SQL editor:
+
+ ```sql
+ CREATE CONNECTION IF NOT EXISTS neon_production_connection
+ TYPE POSTGRESQL
+ OPTIONS (
+  host '<hostname>',
+  port '5432',
+  user secret ('<secret-scope>', '<user-secret-key>'),
+  password secret ('<secret-scope>', '<password-secret-key>')
+ );
+ ```
+
+ > Replace placeholders with your actual Neon hostname, secret scope, and secret keys.
+
+   If you are not using secrets (less secure), run the following command instead:
+
+ ```sql
+ CREATE CONNECTION IF NOT EXISTS neon_production_connection
+ TYPE POSTGRESQL
+ OPTIONS (
+  host '<hostname>',
+  port '5432',
+  user '<username>',
+  password '<password>'
+ );
+ ```
+
+</TabItem>
+
+</Tabs>
+
+### Create a foreign catalog for your Neon database
+
+A foreign catalog in Unity Catalog mirrors the database structure (schemas and tables) from your Neon Postgres instance, making it accessible for querying within Databricks.
+
+<Tabs labels={["Catalog Explorer", "SQL"]}>
+
+<TabItem>
+
+You can skip this step if you created the connection using the Catalog Explorer, as it automatically creates a foreign catalog for you. The catalog is named `<connection-name>_catalog`, where `<connection-name>` is the name you provided when creating the connection (e.g., `neon_production_connection_catalog`).
+
+</TabItem>
+
+<TabItem>
+
+Run the following `CREATE FOREIGN CATALOG` SQL command in a Databricks notebook or the SQL query editor:
+
+```sql
+CREATE FOREIGN CATALOG IF NOT EXISTS neon_federated_db_via_sql
+USING CONNECTION neon_production_connection
+OPTIONS (database '<database-name>');
+```
+
+> Replace `<database-name>` with the actual name of the database in your Neon project that this catalog should mirror.
+
+Unity Catalog will now sync the metadata from your Neon database.
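+
+You can confirm that the foreign catalog is available by listing its schemas:
+
+```sql
+SHOW SCHEMAS IN neon_federated_db_via_sql;
+```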
+
+</TabItem>
+
+</Tabs>
+
+## Querying Neon Postgres from Databricks
+
+Once the connection and foreign catalog are set up, you can query tables in your Neon Postgres database using the standard three-level namespace: `<catalog>.<schema>.<table>`.
+
+<Tabs labels={["Catalog Explorer", "SQL"]}>
+
+<TabItem>
+
+If you are following the `playing_with_neon` example from the prerequisites, you can run the following SQL query in the Databricks SQL editor:
+
+```sql
+SELECT *
+FROM neon_production_connection_catalog.public.playing_with_neon;
+```
+
+> Here, `neon_production_connection_catalog` is the foreign catalog created for your Neon Postgres connection. `public` is the schema, and `playing_with_neon` is the table. You'll need to replace these with your actual catalog, schema, and table names.
+
+![Federated query example in Catalog Explorer](/docs/guides/databricks-federated-sql-query-example-catalog-explorer.png)
+
+</TabItem>
+
+<TabItem>
+
+If you are following the `playing_with_neon` example from the prerequisites, you can run the following SQL query in the Databricks SQL editor:
+
+```sql
+SELECT *
+FROM neon_federated_db_via_sql.public.playing_with_neon;
+```
+
+> Here, `neon_federated_db_via_sql` is the foreign catalog created for your Neon Postgres connection. `public` is the schema, and `playing_with_neon` is the table. You'll need to replace these with your actual catalog, schema, and table names.
+
+![Federated query example in the SQL editor](/docs/guides/databricks-federated-sql-query-example-sql.png)
+
+</TabItem>
+
+</Tabs>
+
+Databricks will translate this SQL statement into a query that runs against your Neon Postgres database, fetching the results directly into your Databricks environment.
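+
+Because the foreign catalog behaves like any other Unity Catalog catalog, you can also join Neon data with Databricks-managed tables in a single query. A sketch, assuming a hypothetical `main.sales.orders` table on the Databricks side:
+
+```sql
+-- Join a Databricks-managed table with the federated Neon table (illustrative names)
+SELECT o.order_id, n.name, n.value
+FROM main.sales.orders AS o
+JOIN neon_production_connection_catalog.public.playing_with_neon AS n
+  ON o.item_id = n.id;
+```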
+
+### Viewing system-generated federated queries
+
+To understand how Databricks translates your queries for the federated source, you can run an `EXPLAIN FORMATTED` statement on your query.
+
+```sql
+EXPLAIN FORMATTED
+SELECT *
+FROM neon_production_connection_catalog.public.playing_with_neon
+WHERE value > 0.5;
+```
+
+![EXPLAIN FORMATTED output for a federated query](/docs/guides/databricks-federated-sql-explain-formatted-query-example.png)
+
+This helps in understanding what parts of the query are pushed down to Neon Postgres for execution.
+
+## Data type mappings
+
+When you read data from Neon Postgres into Databricks, Postgres data types are mapped to Spark data types as follows. This is important for understanding how your Neon data will be represented in Databricks.
+
+| Postgres Type | Spark Type |
+| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- |
+| `numeric` | `DecimalType` |
+| `int2` | `ShortType` |
+| `int4` (if signed)                                                                                                                                                       | `IntegerType`                      |
+| `int8`, `oid`, `xid`, `int4` (if unsigned)                                                                                                                               | `LongType`                         |
+| `float4` | `FloatType` |
+| `double precision`, `float8` | `DoubleType` |
+| `char` | `CharType` |
+| `name`, `varchar`, `tid` | `VarcharType` |
+| `bpchar`, `character varying`, `json`, `money`, `point`, `super`, `text` | `StringType` |
+| `bytea`, `geometry`, `varbyte` | `BinaryType` |
+| `bit`, `bool` | `BooleanType` |
+| `date` | `DateType` |
+| `tabstime`, `time`, `time with time zone`, `timetz`, `time without time zone`, `timestamp with time zone`, `timestamp`, `timestamptz`, `timestamp without time zone`\* | `TimestampType`/`TimestampNTZType` |
+| `Postgresql array type`\*\* | `ArrayType` |
+
+_When you read from Postgres, Postgres `Timestamp` is mapped to Spark `TimestampType` if `preferTimestampNTZ = false` (default). Postgres `Timestamp` is mapped to `TimestampNTZType` if `preferTimestampNTZ = true`._
+
+_Limited array types are supported._
+
+## Supported pushdowns for Postgres
+
+Refer to the [Databricks Lakehouse Federation - Supported pushdowns](https://docs.databricks.com/aws/en/query-federation/postgresql#supported-pushdowns) documentation for up-to-date information on the supported pushdowns for Postgres sources like Neon.
+
+## Best practices
+
+- **Dedicated Neon role:** Create a dedicated Postgres role in Neon with the principle of least privilege (e.g., `SELECT` on specific tables/schemas).
+- **Secure credentials:** Always use Databricks secrets for storing Neon database credentials.
+- **Network configuration:** Ensure your network allows connectivity from Databricks compute to your Neon endpoint. Review [Neon's IP Allow](/docs/introduction/ip-allow) settings to allow access from Databricks.
+- **Query optimization:** Use `WHERE` clauses to filter data as much as possible at the source (Neon) to minimize data transfer and improve query performance, and understand which operations can be pushed down (see the example after this list).
+- **Monitor usage:** Regularly monitor query performance and resource usage on both Databricks and your Neon instance.
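+
+As an example of filtering at the source, the predicate in the following query (using the sample catalog from this guide) can be pushed down and evaluated inside Neon, so only matching rows are transferred to Databricks:
+
+```sql
+SELECT name, value
+FROM neon_production_connection_catalog.public.playing_with_neon
+WHERE value > 0.9;
+```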
+
+## Conclusion
+
+Databricks lakehouse federation provides a powerful and seamless way to query your Neon Postgres data directly, without data movement. By integrating Neon into your Databricks environment, you can unlock new insights, streamline your data workflows, and leverage the advanced analytics capabilities of the Databricks platform.
+
+## References
+
+- [Databricks Lakehouse Federation Documentation](https://docs.databricks.com/aws/en/query-federation)
+- [Databricks - Connect to PostgreSQL using Lakehouse Federation](https://docs.databricks.com/aws/en/query-federation/postgresql)
+- [Databricks Unity Catalog Documentation](https://docs.databricks.com/aws/en/data-governance/unity-catalog)
+- [Databricks Secrets Documentation](https://docs.databricks.com/aws/en/security/secrets)
+- [Neon Documentation](/docs)
+
+
diff --git a/content/docs/guides/logical-replication-databricks.md b/content/docs/guides/logical-replication-databricks.md
new file mode 100644
index 0000000000..9223224aa7
--- /dev/null
+++ b/content/docs/guides/logical-replication-databricks.md
@@ -0,0 +1,284 @@
+---
+title: Replicate data to Databricks with Airbyte
+subtitle: Learn how to replicate data from Neon to Databricks Lakehouse with Airbyte
+enableTableOfContents: true
+isDraft: false
+updatedOn: '2025-06-09T00:00:00.000Z'
+---
+
+Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations. In this guide, you will learn how to define your Neon Postgres database as a data source in Airbyte so that you can stream data to Databricks Lakehouse.
+
+[Airbyte](https://airbyte.com) is an open-source data integration platform that moves data from a source to a destination system. Airbyte offers a large library of connectors for various data sources and destinations.
+
+[Databricks](https://databricks.com) is a unified, open analytics platform that combines the best of data lakes and data warehouses into a "lakehouse" architecture. Databricks allows organizations to build, deploy, and manage data, analytics, and AI solutions at scale.
+
+## Prerequisites
+
+- A source [Neon project](/docs/manage/projects#create-a-project) with a database containing the data you want to replicate. If you're just testing this out and need some data to play with, you can run the following statements from the [Neon SQL Editor](/docs/get-started-with-neon/query-with-neon-sql-editor) or an SQL client such as [psql](/docs/connect/query-with-psql-editor) to create a table with sample data:
+
+ ```sql shouldWrap
+ CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL);
+ INSERT INTO playing_with_neon(name, value)
+ SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i);
+ ```
+
+- An [Airbyte Cloud account](https://airbyte.com/) or a self-hosted Airbyte instance.
+- A [Databricks account](https://databricks.com/try-databricks) with an active workspace.
+- Read the [important notices about logical replication in Neon](/docs/guides/logical-replication-neon#important-notices) before you begin.
+
+## Prepare your source Neon database
+
+This section describes how to prepare your source Neon database (the publisher) for replicating data.
+
+### Enable logical replication in Neon
+
+<Admonition type="important">
+Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect.
+</Admonition>
+
+To enable logical replication in Neon:
+
+1. Select your project in the Neon Console.
+2. On the Neon **Dashboard**, select **Settings**.
+3. Select **Logical Replication**.
+4. Click **Enable** to enable logical replication.
+
+You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](/docs/get-started-with-neon/query-with-neon-sql-editor) or an SQL client such as [psql](/docs/connect/query-with-psql-editor):
+
+```sql
+SHOW wal_level;
+ wal_level
+-----------
+ logical
+```
+
+### Create a Postgres role for replication
+
+It's recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege.
+
+<Tabs labels={["CLI", "Console", "API"]}>
+
+<TabItem>
+
+The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](/docs/reference/cli-roles).
+
+```bash
+neon roles create --name replication_user
+```
+
+</TabItem>
+
+<TabItem>
+
+To create a role in the Neon Console:
+
+1. Navigate to the [Neon Console](https://console.neon.tech).
+2. Select a project.
+3. Select **Branches**.
+4. Select the branch where you want to create the role.
+5. Select the **Roles & Databases** tab.
+6. Click **Add Role**.
+7. In the role creation dialog, specify a role name.
+8. Click **Create**. The role is created, and you are provided with the password for the role.
+
+</TabItem>
+
+<TabItem>
+
+The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranchrole).
+
+```bash
+curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \
+ -H 'Accept: application/json' \
+ -H "Authorization: Bearer $NEON_API_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "role": {
+ "name": "replication_user"
+ }
+}' | jq
+```
+
+</TabItem>
+
+</Tabs>
+
+### Grant schema access to your Postgres role
+
+If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`:
+
+```sql
+GRANT USAGE ON SCHEMA public TO replication_user;
+GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user;
+ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user;
+```
+
+Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication.
+
+### Create a replication slot
+
+Airbyte requires a dedicated replication slot. Only one source should be configured to use this replication slot.
+
+Airbyte uses the `pgoutput` plugin in Postgres for decoding WAL changes into a logical replication stream. To create a replication slot called `airbyte_slot` that uses the `pgoutput` plugin, run the following command on your database using your replication role:
+
+```sql
+SELECT pg_create_logical_replication_slot('airbyte_slot', 'pgoutput');
+```
+
+`airbyte_slot` is the name assigned to the replication slot. You will need to provide this name when you set up your Airbyte source.
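+
+You can verify that the slot was created by querying `pg_replication_slots`:
+
+```sql
+SELECT slot_name, plugin, slot_type
+FROM pg_replication_slots
+WHERE slot_name = 'airbyte_slot';
+```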
+
+### Create a publication
+
+Perform the following steps for each table you want to replicate data from:
+
+1. Set the replica identity (the method of distinguishing between rows) for each table:
+
+ ```sql
+   ALTER TABLE <table_name> REPLICA IDENTITY DEFAULT;
+ ```
+
+ In rare cases, if your tables use data types that support [TOAST](https://www.postgresql.org/docs/current/storage-toast.html) or have very large field values, consider using `REPLICA IDENTITY FULL` instead:
+
+ ```sql
+   ALTER TABLE <table_name> REPLICA IDENTITY FULL;
+ ```
+
+2. Create the Postgres publication. Include all tables you want to replicate as part of the publication:
+
+ ```sql
+   CREATE PUBLICATION airbyte_publication FOR TABLE <tbl1, tbl2, tbl3>;
+ ```
+
+ The publication name is customizable. Refer to the [Postgres docs](https://www.postgresql.org/docs/current/logical-replication-publication.html) if you need to add or remove tables from your publication.
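+
+   You can confirm which tables are included by querying `pg_publication_tables`:
+
+   ```sql
+   SELECT * FROM pg_publication_tables WHERE pubname = 'airbyte_publication';
+   ```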
+
+<Admonition type="note">
+The Airbyte UI currently allows selecting any table for Change Data Capture (CDC). If a table is selected that is not part of the publication, it will not be replicated even though it is selected. If a table is part of the publication but does not have a replication identity, the replication identity will be created automatically on the first run if the Postgres role you use with Airbyte has the necessary permissions.
+</Admonition>
+
+## Create a Postgres source in Airbyte
+
+1. From your Airbyte Cloud account, or your self-hosted Airbyte instance, select **Sources** from the left navigation bar, search for **Postgres**, and then create a new Postgres source.
+2. Enter the connection details for your Neon database. You can find your database connection details by clicking the **Connect** button on your **Project Dashboard**.
+
+ > Make sure to select the `replication_user` role you created earlier when connecting to your Neon database. This role must have the `REPLICATION` privilege and access to the schemas and tables you want to replicate.
+
+ For example, given a connection string like this:
+
+ ```bash shouldWrap
+ postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
+ ```
+
+ Enter the details in the Airbyte **Create a source** dialog as shown below. Your values will differ.
+
+ - **Host**: ep-cool-darkness-123456.us-east-2.aws.neon.tech
+ - **Port**: 5432
+ - **Database Name**: dbname
+ - **Username**: replication_user
+ - **Password**: AbC123dEf
+
+ 
+
+3. Under **Optional fields**, list the schemas you want to sync. Schema names are case-sensitive, and multiple schemas may be specified. By default, `public` is the only selected schema.
+4. Select an SSL mode. You will most frequently choose `require` or `verify-ca`. Both of these options always require encryption. The `verify-ca` mode requires a certificate. Refer to [Connect securely](/docs/connect/connect-securely) for information about the location of certificate files you can use with Neon.
+5. Under **Advanced**:
+
+ - Select **Read Changes using Write-Ahead Log (CDC)** from available replication methods.
+ - In the **Replication Slot** field, enter the name of the replication slot you created previously: `airbyte_slot`.
+ - In the **Publication** field, enter the name of the publication you created previously: `airbyte_publication`.
+ 
+
+### Allow inbound traffic
+
+If you are on Airbyte Cloud, and you are using Neon's **IP Allow** feature to limit IP addresses that can connect to Neon, you will need to allow inbound traffic from Airbyte's IP addresses. You can find a list of IPs that need to be allowlisted in the [Airbyte Security docs](https://docs.airbyte.com/operating-airbyte/security). For self-hosted Airbyte, you will need to allow inbound traffic from the IP address of your Airbyte instance. For information about configuring allowed IPs in Neon, see [Configure IP Allow](/docs/manage/projects#configure-ip-allow).
+
+### Complete the source setup
+
+To complete your source setup, click **Set up source** in the Airbyte UI. Airbyte will test the connection to your database. Once this succeeds, you've successfully configured an Airbyte Postgres source for your Neon database.
+
+## Configure Databricks Lakehouse as a destination
+
+To complete your data integration setup, you can now add Databricks Lakehouse as your destination.
+
+### Prerequisites
+
+- **Databricks Server Hostname**: The hostname of your Databricks SQL Warehouse or All-Purpose Cluster (e.g., `adb-xxxxxxxxxxxxxxx.x.azuredatabricks.net` or `dbc-xxxxxxxx-xxxx.cloud.databricks.com`). You can find this in the Connection Details of your SQL Warehouse or Cluster.
+- **Databricks HTTP Path**: The HTTP Path for your SQL Warehouse or Cluster. Found in the Connection Details.
+- **Databricks Personal Access Token (PAT)**: A token used to authenticate. You can generate it from the same connection details page in Databricks.
+- **Databricks Unity Catalog Name**: The name of the Unity Catalog you wish to use.
+- **(Optional) Default Schema**: The schema within the Unity Catalog where tables will be created if not otherwise specified.
+
+<Admonition type="note">
+Ensure the Databricks SQL Warehouse or Cluster is running and accessible. The PAT must have sufficient permissions within the specified Unity Catalog for the operations Airbyte will perform (e.g., `CREATE TABLE`, `CREATE SCHEMA` if the default schema doesn't exist, and `INSERT`).
+</Admonition>
+
+### Set up Databricks Lakehouse as a destination
+
+1. Navigate to Airbyte.
+2. Select **Destinations** from the left navigation bar, then click **+ New destination**.
+3. Search for **Databricks Lakehouse** and choose it.
+4. Configure the Databricks Lakehouse destination:
+
+ | Field | Description | Example (Illustrative) |
+ | ---------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------- |
+ | **Destination name** | A descriptive name for your destination in Airbyte. | `Databricks Lakehouse` |
+ | **Server Hostname** | The Server Hostname of your Databricks SQL Warehouse or cluster. | `dbc-a1b2345c-d6e7.cloud.databricks.com` |
+ | **HTTP Path** | The HTTP Path from your Databricks SQL Warehouse or cluster's connection details. | `/sql/1.0/warehouses/1234567890abcdef` |
+ | **Databricks Unity Catalog Name** (Required) | The name of the Unity Catalog where data will be written. | `workspace` |
+ | **Authentication** | Choose **Personal Access Token**. | `Personal Access Token` |
+ | _Personal Access Token_ | Enter your Databricks PAT. | `dapi1234567890abcdef1234567890abcd` |
+ | **Port** (Optional Fields) | The port for the Databricks connection. | `443` (Default) |
+ | **Default Schema** (Optional Fields) | The default schema within the Unity Catalog to write to. Airbyte will create this schema if it doesn't exist. | `airbyte_neon_data` |
+ | **Purge Staging Files and Tables** (Optional Fields) | Enable to automatically clean up temporary staging files and tables used during the replication process. Usually recommended. | `Enabled` (Default) |
+   | **Raw Table Schema Name** (Optional Fields) | Schema used for storing raw data tables (`_airbyte_raw_*`). | `airbyte_internal` (Default) |
+
+   ![Databricks Lakehouse destination configuration in Airbyte](/docs/guides/airbyte_databricks_destination.png)
+
+5. When you're finished filling in the fields, click **Set up destination**. Airbyte will test the connection to your Databricks Lakehouse environment.
+
+## Set up a connection
+
+In this step, you'll set up a connection between your Neon Postgres source and your Databricks Lakehouse destination.
+
+To set up a new connection:
+
+1. Navigate to Airbyte.
+2. Select **Connections** from the left navigation bar, then click **+ New connection**.
+3. Select the existing Postgres source you created earlier.
+4. Select the existing Databricks Lakehouse destination you created earlier.
+5. For the **Sync Mode**, select **Replicate Source**. Then, choose the specific tables from your Neon Postgres source that you want to replicate. Ensure you only select tables that are part of the PostgreSQL publication you created earlier (e.g., `playing_with_neon`).
+6. Click **Next**.
+   ![Neon Postgres to Databricks connection setup in Airbyte](/docs/guides/airbyte_neon_databricks_connection_setup.png)
+7. Configure the sync frequency and other settings as needed. Select **Source defined** for the **Destination Namespace**.
+   ![Airbyte sync settings for the Neon to Databricks connection](/docs/guides/airbyte_neon_databricks_sync_settings.png)
+
+Your first sync will start automatically, or you can trigger it manually if you opted for a manual schedule. Airbyte will then replicate data from Neon Postgres to your Databricks Lakehouse. The time the initial sync takes depends on the amount of data.
+
+## Verify the replication
+
+After the sync operation is complete, you can verify the replication by navigating to your Databricks workspace.
+
+1. Go to your Databricks workspace.
+2. Navigate to **SQL Editor** from the left sidebar.
+3. Run the following SQL query to check the replicated data:
+
+ ```sql
+ SELECT * FROM workspace.public.playing_with_neon;
+ ```
+
+ > In the query, substitute `workspace` with the Databricks Unity Catalog Name you configured in Airbyte. The schema `public` should be replaced with the Default Schema you specified in the destination settings. Similarly, replace `playing_with_neon` with the name of the table you replicated.
+
+   ![Replicated data in the Databricks SQL Editor](/docs/guides/databricks_sql_editor_replicated_data.png)
+
+This will display the data replicated from your Neon Postgres into your Databricks Lakehouse.
+
+## References
+
+- [Airbyte Documentation](https://docs.airbyte.com/)
+- [Airbyte Databricks Lakehouse Destination Connector](https://docs.airbyte.com/integrations/destinations/databricks)
+- [Databricks Documentation](https://docs.databricks.com/)
+- [Databricks Unity Catalog documentation](https://docs.databricks.com/aws/en/data-governance/unity-catalog)
+- [Neon Logical Replication](/docs/guides/logical-replication-neon)
+- [Logical replication - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html)
+- [Publications - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-publication.html)
+
+
diff --git a/content/docs/guides/logical-replication-guide.md b/content/docs/guides/logical-replication-guide.md
index 0530b3c94f..cc39fd837c 100644
--- a/content/docs/guides/logical-replication-guide.md
+++ b/content/docs/guides/logical-replication-guide.md
@@ -51,6 +51,8 @@ To get started, jump into one of our step-by-step logical replication guides.
+
+
diff --git a/content/docs/sidebar.yaml b/content/docs/sidebar.yaml
index 1c2a5824cf..4797915c36 100644
--- a/content/docs/sidebar.yaml
+++ b/content/docs/sidebar.yaml
@@ -800,6 +800,8 @@
slug: https://docs.peerdb.io/mirror/cdc-neon-clickhouse
- title: Confluent
slug: guides/logical-replication-kafka-confluent
+ - title: Databricks
+ slug: guides/logical-replication-databricks
- title: Decodable
slug: guides/logical-replication-decodable
- title: Estuary Flow
diff --git a/public/docs/guides/airbyte_databricks_destination.png b/public/docs/guides/airbyte_databricks_destination.png
new file mode 100644
index 0000000000..4d1cf6f363
Binary files /dev/null and b/public/docs/guides/airbyte_databricks_destination.png differ
diff --git a/public/docs/guides/airbyte_neon_databricks_connection_setup.png b/public/docs/guides/airbyte_neon_databricks_connection_setup.png
new file mode 100644
index 0000000000..3475e4a21c
Binary files /dev/null and b/public/docs/guides/airbyte_neon_databricks_connection_setup.png differ
diff --git a/public/docs/guides/airbyte_neon_databricks_sync_settings.png b/public/docs/guides/airbyte_neon_databricks_sync_settings.png
new file mode 100644
index 0000000000..bf73d77a53
Binary files /dev/null and b/public/docs/guides/airbyte_neon_databricks_sync_settings.png differ
diff --git a/public/docs/guides/databricks-federated-sql-create-external-connection-ui.png b/public/docs/guides/databricks-federated-sql-create-external-connection-ui.png
new file mode 100644
index 0000000000..68dc60ed59
Binary files /dev/null and b/public/docs/guides/databricks-federated-sql-create-external-connection-ui.png differ
diff --git a/public/docs/guides/databricks-federated-sql-explain-formatted-query-example.png b/public/docs/guides/databricks-federated-sql-explain-formatted-query-example.png
new file mode 100644
index 0000000000..1bb9ca2ab8
Binary files /dev/null and b/public/docs/guides/databricks-federated-sql-explain-formatted-query-example.png differ
diff --git a/public/docs/guides/databricks-federated-sql-query-example-catalog-explorer.png b/public/docs/guides/databricks-federated-sql-query-example-catalog-explorer.png
new file mode 100644
index 0000000000..b85acf44cc
Binary files /dev/null and b/public/docs/guides/databricks-federated-sql-query-example-catalog-explorer.png differ
diff --git a/public/docs/guides/databricks-federated-sql-query-example-sql.png b/public/docs/guides/databricks-federated-sql-query-example-sql.png
new file mode 100644
index 0000000000..7227d2d5c5
Binary files /dev/null and b/public/docs/guides/databricks-federated-sql-query-example-sql.png differ
diff --git a/public/docs/guides/databricks_sql_editor_replicated_data.png b/public/docs/guides/databricks_sql_editor_replicated_data.png
new file mode 100644
index 0000000000..66995a7197
Binary files /dev/null and b/public/docs/guides/databricks_sql_editor_replicated_data.png differ
diff --git a/public/images/technology-logos/databricks.svg b/public/images/technology-logos/databricks.svg
new file mode 100644
index 0000000000..553c3e4124
--- /dev/null
+++ b/public/images/technology-logos/databricks.svg
@@ -0,0 +1,19 @@
+
\ No newline at end of file