
A Simple Dataflow Pipeline (Python) 2.5


Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In the Cloud console, on the top-right toolbar, click the Open Cloud Shell button.

     

    Highlighted Cloud Shell icon

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.


gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

You can list the active account name with this command:

CODE...
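A typical command for this (a sketch, not necessarily the lab's exact block):

gcloud auth list   # lists credentialed accounts; the active one is marked with an asterisk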

You can list the project ID with this command:

 

CODE...
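Likewise, a typical command for this:

gcloud config list project   # prints the project ID currently configured for gcloud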

Open the SSH terminal and connect to the training VM

You will be running all code from a curated training VM.

  1. In the console, on the Navigation menu (Navigation menu icon), click Compute Engine > VM instances.

  2. Locate the line with the instance called training-vm.

  3. On the far right, under Connect, click on SSH to open a terminal window.

  4. In this lab, you will enter CLI commands on the training-vm.


 

Download the code repository for use in this lab. In the training-vm SSH terminal, enter the following:

CODE...
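Presumably this is a clone of the public training-data-analyst repository, whose directory appears in later steps; for example:

git clone https://github.com/GoogleCloudPlatform/training-data-analyst   # fetch the lab's code repository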

Follow these instructions to create a bucket.

  1. In the Console, on the Navigation menu, click Cloud Storage > Buckets.

  2. Click + Create.

  3. Specify the following, and leave the remaining settings as their defaults:

Property          Value (type value or select option as specified)
Name              <Project ID>
Location type     Multi-region
  1. Click Create.

  2. If you get the Public access will be prevented prompt, select Enforce public access prevention on this bucket and click Confirm.

Record the name of your bucket. You will need it in subsequent tasks.

  1. In the training-vm SSH terminal, enter the following to create an environment variable named BUCKET (replacing the placeholder text with your bucket name) and verify that it exists with the echo command:
BUCKET="project_place_holder_text"
echo $BUCKET


You can use $BUCKET in terminal commands. If you need to enter the bucket name <your-bucket> in a text field in the console, you can quickly retrieve it with echo $BUCKET.
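For illustration, the variable can then be used wherever a bucket path is needed, for example:

gsutil ls gs://$BUCKET                   # list the contents of your bucket
gsutil cp local-file.txt gs://$BUCKET/   # copy a (hypothetical) local file into the bucket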

Task 3. Pipeline filtering


The goal of this lab is to become familiar with the structure of a Dataflow project and learn how to execute a Dataflow pipeline.

  1. Return to the training-vm SSH terminal and navigate to the directory /training-data-analyst/courses/data_analysis/lab2/python, which contains the file grep.py.

  2. View the file with Nano. Do not make any changes to the code:

CODE...

 

  1. Press CTRL+X to exit Nano.

Can you answer these questions about the file grep.py?
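As a point of reference, a grep-style Beam pipeline generally has the shape sketched below. The input pattern, search term, and output prefix here are assumptions for illustration, not necessarily the values used in grep.py:

import apache_beam as beam

# Assumed values for illustration; grep.py defines its own input pattern, search term, and output prefix.
INPUT_PATTERN = 'somedir/*.java'
SEARCH_TERM = 'import'
OUTPUT_PREFIX = '/tmp/output'

def my_grep(line, term):
    # Emit the line only if it starts with the search term.
    if line.startswith(term):
        yield line

with beam.Pipeline('DirectRunner') as p:
    (p
     | 'GetJava' >> beam.io.ReadFromText(INPUT_PATTERN)                    # read the source files line by line
     | 'Grep' >> beam.FlatMap(lambda line: my_grep(line, SEARCH_TERM))     # keep lines that start with the term
     | 'Write' >> beam.io.WriteToText(OUTPUT_PREFIX))                      # write shards named output-0000x-of-0000y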


Task 4. Execute the pipeline locally

  1. In the training-vm SSH terminal, locally execute grep.py:
CODE...

 

Note: Ignore any warnings.

The output file will be output.txt. If the output is large enough, it will be sharded into separate parts with names like output-00000-of-00001.

  1. Locate the correct file by examining the file's time:
CODE...

 

  1. Examine the output file(s).

  2. You can replace "-*" below with the appropriate suffix:

CODE...
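Assuming grep.py writes to the output prefix /tmp/output (an assumption; the script defines the actual prefix), the local run, file listing, and inspection look roughly like this:

python3 grep.py          # run the pipeline locally with the default (Direct) runner
ls -lrt /tmp/output*     # sort by modification time; the newest files appear last
cat /tmp/output-*        # print the matching lines from all output shards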

 

Does the output seem logical?


Task 5. Execute the pipeline on the cloud

  1. Copy some Java files to the cloud. In the training-vm SSH terminal, enter the following command:
CODE...
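In general form this is a gsutil copy of the Java sources into your bucket; the source path below is a placeholder, not the lab's actual path:

gsutil cp <path-to-java-sources>/*.java gs://$BUCKET/javahelp   # stage the input files in Cloud Storage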

 

  1. Using Nano, edit the Dataflow pipeline file grepc.py:
nano grepc.py


  1. Replace PROJECT, BUCKET, and REGION with the values listed below. Please retain the outside single quotes.
CODE...

 

CODE...

 

CODE...
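For illustration only, the three edited assignments take this general form (keep the single quotes; the real values are the ones the lab lists):

PROJECT = 'your-project-id'    # placeholder; use your Project ID
BUCKET  = 'your-bucket-name'   # placeholder; use the bucket created earlier
REGION  = 'your-region'        # placeholder; use the region assigned for the lab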

 

Save the file and close Nano by pressing the CTRL+X key, then type Y, and press Enter.

  1. Submit the Dataflow job to the cloud:
CODE...
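Assuming grepc.py is a self-contained script configured for the Dataflow runner, submitting presumably amounts to running it with Python:

python3 grepc.py   # submits the job; progress then appears on the Dataflow page of the console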

 

Because this is such a small job, running on the cloud will take significantly longer than running it locally (on the order of 7-10 minutes).

  1. Return to the browser tab for the console.

  2. On the Navigation menu, click Dataflow and click on your job to monitor progress.

  3. Wait for the Job status to be Succeeded.

  4. Examine the output in the Cloud Storage bucket.

  5. On the Navigation menu, click Cloud Storage > Buckets and click on your bucket.

  6. Click the javahelp directory.

This job generates the file output.txt. If the file is large enough, it will be sharded into multiple parts with names like: output-0000x-of-000y. You can identify the most recent file by name or by the Last modified field.

  1. Click on the file to view it.

Alternatively, you can download the file via the training-vm SSH terminal and view it:

CODE...
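A typical way to do this, assuming the output lives under the javahelp directory used above:

gsutil cp gs://$BUCKET/javahelp/output* .   # download all output shards to the current directory
cat output*                                 # view the downloaded results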
