This repository was archived by the owner on Jul 29, 2024. It is now read-only.

Commit fe0b2c2
Typos
1 parent c79bbc1

10 files changed: +12 -12 lines

src/components/PageLayout/PageFooter.jsx (1 addition, 1 deletion)

@@ -104,7 +104,7 @@ const PageFooter = () => (
 <Typography variant="p2">
 Copyright © {new Date().getFullYear()} Delta Lake, a series of LF
 Projects, LLC. For web site terms of use, trademark policy and other
-project polcies please see{" "}
+project policies please see{" "}
 <Link href="https://lfprojects.org" newTab>
 https://lfprojects.org
 </Link>

src/pages/latest/concurrency-control.mdx (1 addition, 1 deletion)

@@ -51,7 +51,7 @@ operate in three stages:
 
 The following table describes which pairs of write operations can conflict. Compaction refers to [file compaction operation](/latest/best-practices#compact-files) written with the option dataChange set to false.
 
-|                                | **INSERT**      | **UPDATE, DELTE, MERGE INTO** | **OPTIMIZE** |
+|                                | **INSERT**      | **UPDATE, DELETE, MERGE INTO** | **OPTIMIZE** |
 | ------------------------------ | --------------- | ----------------------------- | ------------ |
 | **INSERT**                     | Cannot conflict |                               |              |
 | **UPDATE, DELETE, MERGE INTO** | Can conflict    | Can conflict                  |              |

src/pages/latest/delta-batch.mdx (3 additions, 3 deletions)

@@ -13,7 +13,7 @@ For many Delta Lake operations on tables, you enable integration with Apache Spa
 
 Delta Lake supports creating two types of tables --- tables defined in the metastore and tables defined by path.
 
-To work with metastore-defined tables, you must enable integration with Apache Spark DataSourceV2 and Catalog APIs by setting configurations when you create a new `SparkSession`. See [Configure SparkSesion](#configure-sparksession).
+To work with metastore-defined tables, you must enable integration with Apache Spark DataSourceV2 and Catalog APIs by setting configurations when you create a new `SparkSession`. See [Configure SparkSession](#configure-sparksession).
 
 You can create tables in the following ways.
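For context, the `SparkSession` configuration that paragraph refers to is the standard pair of Delta settings (the first of which also appears in the `pyspark --conf` hunk header below); a minimal PySpark sketch:

```python
from pyspark.sql import SparkSession

# Standard Delta Lake session settings: the SQL extension plus the Delta
# catalog implementation, enabling DataSourceV2 / Catalog API integration.
spark = (
    SparkSession.builder.appName("delta-example")  # app name is arbitrary
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)
```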

@@ -1347,7 +1347,7 @@ pyspark --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" --
 
 ## Configure storage credentials
 
-Delta Lake uses Hadoop FileSystem APIs to access the storage systems. The credentails for storage systems usually can be set through Hadoop configurations. Delta Lake provides multiple ways to set Hadoop configurations similar to Apache Spark.
+Delta Lake uses Hadoop FileSystem APIs to access the storage systems. The credentials for storage systems usually can be set through Hadoop configurations. Delta Lake provides multiple ways to set Hadoop configurations similar to Apache Spark.
 
 ### Spark configurations
 
@@ -1365,7 +1365,7 @@ Spark SQL will pass all of the current [SQL session configurations](http://spark
 
 Besides setting Hadoop file system configurations through the Spark (cluster) configurations or SQL session configurations, Delta supports reading Hadoop file system configurations from `DataFrameReader` and `DataFrameWriter` options (that is, option keys that start with the `fs.` prefix) when the table is read or written, by using `DataFrameReader.load(path)` or `DataFrameWriter.save(path)`.
 
-For example, you can pass your storage credentails through DataFrame options:
+For example, you can pass your storage credentials through DataFrame options:
 
 <CodeTabs>
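The corrected "credentials" paragraphs describe per-read, per-write Hadoop options. A minimal sketch of the pattern, assuming an S3A path and placeholder keys (`fs.s3a.access.key` and `fs.s3a.secret.key` are standard Hadoop S3A settings, not Delta-specific):

```python
# Pass fs.-prefixed Hadoop options on the reader itself rather than in the
# cluster config; bucket name and credentials below are placeholders.
df = (
    spark.read.format("delta")
    .option("fs.s3a.access.key", "<access-key>")
    .option("fs.s3a.secret.key", "<secret-key>")
    .load("s3a://my-bucket/delta/events")
)
```

The same options work on the write path via `DataFrameWriter.save(path)`.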

src/pages/latest/delta-storage.mdx (1 addition, 1 deletion)

@@ -203,7 +203,7 @@ that S3 is lacking.
 
 - All of the requirements listed in [\_](#requirements-s3-single-cluster)
   section
-- In additon to S3 credentials, you also need DynamoDB operating permissions
+- In addition to S3 credentials, you also need DynamoDB operating permissions
 
 #### Quickstart (S3 multi-cluster)
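The fixed bullet is part of the DynamoDB-backed S3 multi-cluster setup. A hedged sketch of the session configuration that quickstart describes, assuming the `delta-storage-s3-dynamodb` artifact is on the classpath (table name and region are placeholders):

```python
from pyspark.sql import SparkSession

# S3DynamoDBLogStore coordinates commits through a DynamoDB table, which is
# why the bullet above needs DynamoDB permissions on top of S3 credentials.
spark = (
    SparkSession.builder
    .config("spark.delta.logStore.s3a.impl",
            "io.delta.storage.S3DynamoDBLogStore")
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.tableName",
            "delta_log")   # placeholder table name
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.region",
            "us-west-2")   # placeholder region
    .getOrCreate()
)
```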

src/pages/latest/delta-streaming.mdx (1 addition, 1 deletion)

@@ -279,7 +279,7 @@ For applications with more lenient latency requirements, you can save computing
 Available in Delta Lake 2.0.0 and above.
 </Info>
 
-The command `foreachBatch` allows you to specify a function that is executed on the output of every micro-batch after arbitrary transformations in the streaming query. This allows implementating a `foreachBatch` function that can write the micro-batch output to one or more target Delta table destinations. However, `foreachBatch` does not make those writes idempotent as those write attempts lack the information of whether the batch is being re-executed or not. For example, rerunning a failed batch could result in duplicate data writes.
+The command `foreachBatch` allows you to specify a function that is executed on the output of every micro-batch after arbitrary transformations in the streaming query. This allows implementing a `foreachBatch` function that can write the micro-batch output to one or more target Delta table destinations. However, `foreachBatch` does not make those writes idempotent as those write attempts lack the information of whether the batch is being re-executed or not. For example, rerunning a failed batch could result in duplicate data writes.
 
 To address this, Delta tables support the following `DataFrameWriter` options to make the writes idempotent:
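The options themselves are cut off by the hunk boundary; a minimal sketch of an idempotent `foreachBatch` writer, assuming the `txnAppId`/`txnVersion` options that Delta 2.0+ documents for this purpose (the stream source `streaming_df` and target path are placeholders):

```python
# txnAppId identifies the writing application (unique per target table);
# txnVersion reuses the monotonically increasing batch_id, so a re-executed
# micro-batch is recognized and skipped instead of being written twice.
def upsert_to_delta(micro_batch_df, batch_id):
    (micro_batch_df.write.format("delta")
        .option("txnAppId", "my-streaming-app")  # placeholder app id
        .option("txnVersion", batch_id)
        .mode("append")
        .save("/tmp/delta/events"))              # placeholder target path

streaming_df.writeStream.foreachBatch(upsert_to_delta).start()
```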

src/pages/latest/delta-update.mdx (1 addition, 1 deletion)

@@ -455,7 +455,7 @@ You can reduce the time taken by merge using the following approaches:
 
 </CodeTabs>
 
-will make the query faster as it looks for matches only in the relevant partitions. Furthermore, it will also reduce the chances of conflicts with other concurrent operations. See [concurency control](/latest/concurrency-control) for more details.
+will make the query faster as it looks for matches only in the relevant partitions. Furthermore, it will also reduce the chances of conflicts with other concurrent operations. See [concurrency control](/latest/concurrency-control) for more details.
 
 - **Compact files**: If the data is stored in many small files, reading the data to search for matches can become slow. You can compact small files into larger files to improve read throughput. See [best practices for compaction](/latest/best-practices/#compact-files) for details.
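The sentence fixed here concerns adding a partition predicate to the merge condition. A sketch with the Python `DeltaTable` API, where the path, the partition column `date`, and the source `updates_df` are all placeholders:

```python
from delta.tables import DeltaTable

# Pinning the partition in the match condition lets merge scan only that
# partition's files and narrows the window for concurrent-write conflicts.
target = DeltaTable.forPath(spark, "/tmp/delta/events")  # placeholder path
(target.alias("t")
    .merge(updates_df.alias("s"),
           "t.date = '2024-01-01' AND t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```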

src/pages/latest/integrations.mdx (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 ---
 title: Access Delta tables from external data processing engines
-description: Docs for accessesing Delta tables from external data processing engines
+description: Docs for accessing Delta tables from external data processing engines
 ---
 
 You can access Delta tables from Apache Spark and [other data processing systems](https://delta.io/integrations/). Here is the list of integrations that enable you to access Delta tables from external data processing engines.

src/pages/latest/porting.mdx (1 addition, 1 deletion)

@@ -122,7 +122,7 @@ migrating from older to newer versions of Delta Lake.
 
 Delta Lake 1.2.1, 2.0.0 and 2.1.0 have a bug in their DynamoDB-based S3 multi-cluster configuration implementations where an incorrect timestamp value was written to DynamoDB. This caused [DynamoDB’s TTL](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) feature to cleanup completed items before it was safe to do so. This has been fixed in Delta Lake versions 2.0.1 and 2.1.1, and the TTL attribute has been renamed from `commitTime` to `expireTime`.
 
-If you already have TTL enabled on your DynamoDB table using the old attribute, you need to disable TTL for that attribute and then enable it for the new one. You may need to wait an hour between these two operations, as TTL settings changes may take some time to propagate. See the DynamoDB docs [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-before-you-start.html). If you don’t do this, DyanmoDB’s TTL feature will not remove any new and expired entries. There is no risk of data loss.
+If you already have TTL enabled on your DynamoDB table using the old attribute, you need to disable TTL for that attribute and then enable it for the new one. You may need to wait an hour between these two operations, as TTL settings changes may take some time to propagate. See the DynamoDB docs [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-before-you-start.html). If you don’t do this, DynamoDB’s TTL feature will not remove any new and expired entries. There is no risk of data loss.
 
 ```bash
 # Disable TTL on old attribute
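The page's own snippet (truncated by this hunk) uses the AWS CLI; sketched below is a boto3 equivalent of the two-step TTL switch the paragraph describes, with placeholder table name and region:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-west-2")  # placeholder region

# Step 1: disable TTL on the old attribute.
ddb.update_time_to_live(
    TableName="delta_log",  # placeholder table name
    TimeToLiveSpecification={"Enabled": False, "AttributeName": "commitTime"},
)

# Step 2: once the change propagates (possibly up to an hour later),
# enable TTL on the renamed attribute.
ddb.update_time_to_live(
    TableName="delta_log",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expireTime"},
)
```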

src/pages/latest/quick-start.mdx (1 addition, 1 deletion)

@@ -373,7 +373,7 @@ deltaTable.toDF().show();
 You should see that some of the existing rows have been updated and new rows
 have been inserted.
 
-For more information on these operations, see [Table delets, updates, and merges](/latestl/delta-update).
+For more information on these operations, see [Table deletes, updates, and merges](/latestl/delta-update).
 
 ## Read older versions of data using time travel

static/quickstart_docker/README.md (1 addition, 1 deletion)

@@ -202,7 +202,7 @@ The current version is `delta-spark_2.12:3.0.0` which corresponds to Apache Spar
 
 1. Open a bash shell (if on windows use git bash, WSL, or any shell configured for bash commands)
 
-2. Run a container from the image with a JuypterLab entrypoint
+2. Run a container from the image with a JupyterLab entrypoint
 
 ```bash
 # Build entry point
