PinCompute: A Kubernetes-Backed General Purpose Compute Platform for Pinterest | by Pinterest Engineering | Pinterest Engineering Blog | Oct, 2023


Harry Zhang, Jiajun Wang, Yi Li, Shunyao Li, Ming Zong, Haniel Martino, Cathy Lu, Quentin Miao, Hao Jiang, James Wen, David Westbrook | Cloud Runtime Team


Modern compute platforms are foundational to accelerating innovation and running applications more efficiently. At Pinterest, we are evolving our compute platform to provide an application-centric and fully managed compute API for the 90th percentile of use cases. This will accelerate innovation through platform agility, scalability, and a reduced cost of keeping systems up to date, and will improve efficiency by running our users' applications on Kubernetes-based compute. We refer to this next generation compute platform as PinCompute, and our multi-year vision is for PinCompute to run the most mission critical applications and services at Pinterest.

PinCompute aligns with the Platform as a Service (PaaS) cloud computing model, in that it abstracts away the undifferentiated heavy lifting of managing infrastructure and Kubernetes and allows users to focus on the unique aspects of their applications. PinCompute evolves Pinterest's architecture with cloud-native principles, including containers, microservices, and service mesh; reduces the cost of keeping systems up to date by providing and managing immutable infrastructure, operating system upgrades, and Graviton instances; and delivers cost savings by applying enhanced scheduling capabilities to large multi-tenant Kubernetes clusters, including oversubscription, bin packing, resource tiering, and trough usage.

In this article, we discuss the PinCompute primitives, architecture, control plane and data plane capabilities, and showcase the value that PinCompute has delivered for innovation and efficiency at Pinterest.

PinCompute is a regional Platform-as-a-Service (PaaS) built on top of Kubernetes. PinCompute's architecture consists of a host Kubernetes cluster (host cluster) and multiple member Kubernetes clusters (member clusters). The host cluster runs the regional federation control plane and keeps track of workloads in the region. The member clusters are zonal and are used for the actual workload execution. Each zone can have multiple member clusters, which strictly align with the failure domains defined by the cloud provider, clearly defining fault isolation and operation boundaries for the platform to ensure availability and control blast radius. All member clusters share a common Kubernetes setup across control plane and data plane capabilities, and they support heterogeneous capabilities such as different workload types and hardware selections. PinCompute is multi-tenant, with a variety of workload types from different teams and organizations sharing the same platform. The platform provides the necessary isolation to ensure it can be shared across tenants securely and efficiently.

Figure 1: High Level Architecture of PinCompute

Users access the platform via Compute APIs to perform operations on their workloads. We leverage Custom Resources (CR) to define the types of workloads supported by the platform, and the platform offers a range of workload orchestration capabilities that support both batch jobs and long running services in various forms. When a workload is submitted to the platform, it is first persisted via the host cluster's Kubernetes API. The federation control plane then kicks in to perform the workload management tasks needed at the regional level, including quota enforcement, workload sharding, and member cluster selection. The workload shards are then propagated to member clusters for execution. The member cluster control plane consists of a combination of in-house and open source operators that are responsible for orchestrating workloads of different types. The federation control plane also collects execution statuses of workloads from their corresponding member clusters and aggregates them to be consumable via PinCompute APIs.

Figure 2: Workflow for Execution and Status Aggregation of PinCompute
Figure 3: Workload architecture on PinCompute

PinCompute primitives serve heterogeneous workloads across Pinterest: long running, run-to-finish, ML training, scheduled runs, and more. These use cases fall into three categories: (1) general purpose compute and service deployment, (2) run-to-finish jobs, and (3) infrastructure services. Pinterest run-to-finish jobs and infrastructure services are supported by existing Kubernetes native and Pinterest-specific resources, and with our latest thinking on how to define simple, intuitive, and extensible compute primitives, PinCompute introduces a new set of primitives for general purpose compute and service deployment. These primitives include PinPod, PinApp, and PinScaler.

PinPod is the basic building block for general purpose compute at Pinterest. Like the native Kubernetes Pod, PinPod inherits the Pod's essence of being a foundational building block while providing additional Pinterest-specific capabilities. This includes features like per-container updates, managed sidecars, data persistence, failovers, and more, which allow PinPod to be easily leveraged as a building block under various production scenarios at Pinterest. PinPod is designed to create a clear divide between application and infrastructure teams, while still retaining the lightweight nature of running containers. It solves many existing pain points: for example, the per-container update can speed up application rolling updates, reduce resource consumption, and eliminate disturbance to user containers during infra sidecar upgrades.
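The per-container update idea can be illustrated with a minimal sketch. This is purely hypothetical (the real PinPod controller is not public): given an old and a new Pod spec, only containers whose definitions changed need to be restarted in place, while the Pod itself, and therefore its network identity and data volumes, stays put.

```python
def containers_to_restart(old_spec: dict, new_spec: dict) -> list[str]:
    """Return names of containers whose spec changed between revisions.

    A PinPod-style controller could restart only these containers in
    place instead of recreating the whole Pod.
    """
    old = {c["name"]: c for c in old_spec["containers"]}
    new = {c["name"]: c for c in new_spec["containers"]}
    return [name for name, spec in new.items() if old.get(name) != spec]

old = {"containers": [
    {"name": "app", "image": "app:v1"},
    {"name": "metrics-sidecar", "image": "metrics:v7"},
]}
new = {"containers": [
    {"name": "app", "image": "app:v1"},
    {"name": "metrics-sidecar", "image": "metrics:v8"},
]}
print(containers_to_restart(old, new))  # ['metrics-sidecar']
```

In this example only the sidecar image changed, so the user's `app` container is untouched by the infra upgrade, which is exactly the pain point described above.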

PinApp is an abstraction that provides the best way to run and manage long running applications at Pinterest. By leveraging PinPod as an application replica, PinApp inherits all of PinPod's integrations and software delivery best practices. Thanks to the federation control plane, PinApp offers a set of built-in orchestration capabilities that fulfill common distributed application management requirements, including zone-based rollouts and balancing zonal capacity. PinApp supports the functionality offered by Kubernetes native primitives such as Deployments and ReplicaSets, but also includes extensions like deployment semantics to meet business needs and enhance manageability.

PinScaler is an abstraction that supports application auto scaling at Pinterest. It is integrated with Statsboard, Pinterest's native metrics dashboard, allowing users to configure application-level metrics with desired thresholds to trigger scaling, along with scaling safeguards such as a cool down window and replica min/max limits. PinScaler supports simple scaling with CPU and memory metrics, as well as scheduled scaling and custom metrics, to cover various production scenarios.
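The safeguards mentioned above can be sketched in a few lines. This is an illustrative toy, not PinScaler's actual implementation: it applies the proportional rule the Kubernetes HPA uses, clamps the result to min/max replicas, and holds the current count inside a cool down window.

```python
import time

class ScalerSketch:
    """Hypothetical PinScaler-style decision logic: scale on a metric
    vs. target ratio, clamped to min/max replicas, with a cool down."""

    def __init__(self, min_replicas, max_replicas, target, cooldown_s):
        self.min_replicas = min_replicas
        self.max_replicas = max_replicas
        self.target = target          # e.g. target CPU utilization (0-1)
        self.cooldown_s = cooldown_s
        self.last_scale_ts = float("-inf")

    def desired_replicas(self, current, metric, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_scale_ts < self.cooldown_s:
            return current  # still cooling down: hold steady
        # Proportional rule, as in the Kubernetes HPA algorithm.
        desired = max(1, round(current * metric / self.target))
        desired = min(self.max_replicas, max(self.min_replicas, desired))
        if desired != current:
            self.last_scale_ts = now
        return desired

s = ScalerSketch(min_replicas=2, max_replicas=10, target=0.5, cooldown_s=300)
print(s.desired_replicas(current=4, metric=0.9, now=0))    # 7 (scale out)
print(s.desired_replicas(current=7, metric=0.2, now=100))  # 7 (cooldown holds)
```

The cool down window is what prevents the second call from immediately scaling back in on a momentary dip in the metric.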

Figure 4: PinCompute Primitives: PinPod, PinApp, and PinScaler. PinPod operates as an independent workload, and also as a reusable building block for the higher-order primitive PinApp. PinScaler automatically scales PinApp.

Returning to the bigger picture, PinCompute leverages the next generation primitives (PinPod, PinApp, PinScaler) and building blocks from native Kubernetes and open source communities, along with deep integrations with the federation architecture, to provide the following categories of use cases:

(1) General purpose compute and service deployment: This is handled by PinCompute's new primitive types. PinApp and PinScaler help long-running stateless services deploy and scale quickly. PinPod functions as a general purpose compute unit and currently serves Jupyter Notebook for Pinterest developers.

(2) Run-to-finish jobs: PinterestJobSet leverages Jobs to provide users a mechanism to execute run-to-finish, framework-less parallel processing; PinterestTrainingJob leverages TFJob and PyTorchJob from the Kubeflow community for distributed training; PinterestCronJob leverages CronJob to execute scheduled jobs based on cron expressions.

(3) Infrastructure services: We have PinterestDaemon, which leverages DaemonSet, and a proprietary PinterestSideCar, to support different deploy modes of infrastructure services. Components that can be shared by multiple tenants (e.g. logging agent, metrics agent, configuration deployment agent) are deployed as PinterestDaemons, which ensures one copy per node, shared by all Pods on that node. Those that cannot be shared leverage PinterestSideCar and are deployed as sidecar containers within user Pods.

The PinCompute primitives enable Pinterest developers to delegate infrastructure management and the associated concerns of troubleshooting and operations, allowing them to focus on evolving business logic to better serve Pinners.

Users access PinCompute primitives via PinCompute's Platform Interfaces, which consist of an API layer, a client layer for the APIs, and the underlying services and storage that support those APIs.

Figure 5: High level architecture of the PinCompute Platform Interface layer

PinCompute API

PinCompute API is the gateway for users to access the platform. It provides three groups of APIs: workload APIs, operation APIs, and insight APIs. Workload APIs contain methods to perform CRUD actions on compute workloads; operation APIs provide mechanisms such as streaming logs or opening container shells to troubleshoot live workloads; and insight APIs provide users with runtime information such as application state changes and system internal events to help them understand the state of their existing and past workloads.

Why PinCompute API

Introducing PinCompute API on top of the raw Kubernetes API has many benefits. First, as PinCompute federates many Kubernetes clusters, PinCompute API integrates user requests with federation and aggregates cross-cluster information to form a holistic user-side view of the compute platform. Second, PinCompute API accesses the Kubernetes API efficiently. For example, it contains a caching layer to serve read APIs efficiently, which offloads expensive list and query API calls from the Kubernetes API server. Lastly, as a gateway service, PinCompute API ensures a uniform user experience when accessing different PinCompute backend services such as Kubernetes, node service, insights service, project governance services, etc.
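The read-path caching idea can be sketched as a short-TTL cache in front of the expensive API-server call. This is illustrative only (the real caching layer and its keys are not public): repeated reads within the TTL are served locally, so only cache misses reach the Kubernetes API server.

```python
import time

class ReadCache:
    """Minimal sketch of a read-through TTL cache for list/get calls,
    offloading repeated reads from the Kubernetes API server."""

    def __init__(self, fetch, ttl_s=5.0, clock=time.monotonic):
        self.fetch = fetch      # the expensive call to the API server
        self.ttl_s = ttl_s
        self.clock = clock
        self._entries = {}      # key -> (expires_at, value)
        self.misses = 0

    def get(self, key):
        now = self.clock()
        hit = self._entries.get(key)
        if hit and hit[0] > now:
            return hit[1]       # fresh entry: no API server round trip
        self.misses += 1
        value = self.fetch(key)
        self._entries[key] = (now + self.ttl_s, value)
        return value

cache = ReadCache(fetch=lambda key: f"pods-in-{key}", ttl_s=5.0)
cache.get("cluster-a")   # miss: one call to the backing store
cache.get("cluster-a")   # hit: served from cache
print(cache.misses)      # 1
```

The tradeoff is staleness bounded by the TTL, which is usually acceptable for dashboard-style reads but not for writes, so mutating calls would bypass such a cache.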

Figure 6: PinCompute API data flow

Integrating With Pinterest Infrastructure

This layer incorporates Pinterest's infrastructure capabilities, like rate limiting and security practices, to simplify Kubernetes API usage and provide a stable interface for our API users and developers. The PinCompute API implements rate limiting mechanisms to ensure fair resource usage, leveraging our Traffic team's rate limiting sidecar and benefiting from reusable Pinterest components. PinCompute API is also fully integrated with Pinterest's proprietary security primitives to ensure authentication, authorization, and auditing follow paved paths. This integration allows us to provide Pinterest developers with a unified access control experience at both the API call and API resource level. These integrations are critical for PinCompute APIs to be reliable, secure, and compliant.

Enhanced API Semantics

PinCompute API provides enhanced API semantics on top of the Kubernetes API to improve the user experience. One important enhancement is that PinCompute API presents the raw Kubernetes data model in a simplified way, with only the information relevant to building software at Pinterest, which not only reduces the infrastructure learning curve for developers focused on building high level application logic, but also improves data efficiency for API serving. For example, removing managed fields reduces data size by up to 50% for PinCompute API calls. We also designed the APIs to be more descriptive for use cases such as pause, stop, restart-container, etc., which are intuitive and easy to use in many scenarios. PinCompute provides OpenAPI documentation, auto-generated clients, documentation, and SDKs to help users self-serve building applications on PinCompute.
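As a rough illustration of the simplification step, the sketch below recursively drops noisy bookkeeping fields (such as `metadata.managedFields`, which Kubernetes server-side apply attaches to every object) before an object is returned to users. The field list and function name are illustrative, not PinCompute's actual schema.

```python
def simplify(obj, drop=("managedFields",)):
    """Recursively remove bookkeeping fields from a Kubernetes object
    so API responses carry only user-relevant information."""
    if isinstance(obj, dict):
        return {k: simplify(v, drop) for k, v in obj.items() if k not in drop}
    if isinstance(obj, list):
        return [simplify(v, drop) for v in obj]
    return obj

pod = {
    "metadata": {"name": "web-0", "managedFields": [{"manager": "kubelet"}]},
    "spec": {"nodeName": "node-1"},
}
print(simplify(pod))
# {'metadata': {'name': 'web-0'}, 'spec': {'nodeName': 'node-1'}}
```

Because `managedFields` grows with every field manager that touches an object, stripping it is where much of the payload reduction mentioned above would come from.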

PinCompute SDK

We strategically invest in building an SDK for clients to standardize access to PinCompute. With the SDK, we are able to encapsulate best practices such as error handling, retry with backoff, logging, and metrics as reusable building blocks, and ensure these best practices are always applied to a client. We also publish and manage versioned SDKs with clear guidance on how to develop on top of them. We work closely with our users to ensure adoption of the latest and greatest SDK versions for optimized interactions with PinCompute.
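One of those building blocks, retry with backoff, might look like the sketch below. This is a generic pattern (exponential backoff with full jitter), not PinCompute SDK code; the function name and defaults are assumptions.

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay_s=0.1, sleep=time.sleep):
    """Retry a transiently failing call with exponential backoff and
    full jitter, the kind of helper an SDK bakes in for every client."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise               # out of attempts: surface the error
            # Full jitter: sleep a random amount up to base * 2^attempt.
            sleep(random.uniform(0, base_delay_s * 2 ** attempt))

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retries(flaky, sleep=lambda _: None))  # ok (3rd attempt)
```

Centralizing this in the SDK is what guarantees every caller gets jittered backoff rather than hammering the API in lockstep after a failure.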

Resource Model

PinCompute supports three resource tiers: Reserved, OnDemand, and Preemptible. Users define the resource quota of their projects for each tier. Reserved tier quotas are backed by a fixed-size resource pool and a dedicated workload scheduling queue, which ensures scheduling throughput and capacity availability. OnDemand tier quotas leverage a globally shared, dynamically sized resource pool, serving workloads in a first-come, first-served manner. The Preemptible tier is being developed to make opportunistic use of unused Reserved and OnDemand tier capacity, which can be reclaimed when needed by the corresponding tiers. PinCompute clusters are also provisioned with a buffer space of active but unused resources to accommodate workload bursts. The following diagram illustrates the resource model of PinCompute.
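A tier-aware quota check could be as simple as the sketch below. This is illustrative only (real enforcement happens in the federation control plane, and its data model is not public): a workload is admitted to a tier only if its project still has headroom there.

```python
from dataclasses import dataclass

@dataclass
class TierQuota:
    used: int    # e.g. CPU cores currently consumed by the project
    limit: int   # project's quota in this tier

def admit(project_quotas: dict, tier: str, request: int) -> bool:
    """Admit a workload only if the project has room in the requested tier."""
    q = project_quotas.get(tier)
    return q is not None and q.used + request <= q.limit

quotas = {
    "Reserved": TierQuota(used=80, limit=100),
    "OnDemand": TierQuota(used=390, limit=400),
}
print(admit(quotas, "Reserved", 15))   # True: 80 + 15 <= 100
print(admit(quotas, "OnDemand", 15))   # False: exceeds the OnDemand quota
```

In practice a rejected OnDemand request would queue or fall back rather than fail outright, but the headroom check is the core of per-tier quota enforcement.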

Figure 7: PinCompute resource model

Scheduling Architecture

PinCompute consists of two layers of scheduling mechanisms to ensure effective workload placement. Cluster level scheduling is performed in PinCompute's regional federation control plane: it takes a workload and picks one or more member clusters for execution. During cluster level scheduling, the workload is first passed through a group of filters that rule out clusters that cannot fit it, and then a group of score calculators ranks the candidate clusters. Cluster level scheduling ensures that high level placement strategy and resource requirements are satisfied, and also takes factors such as load distribution and cluster health into consideration to perform regional optimizations. Node level scheduling happens inside member clusters, where workloads are converted to Pods by the corresponding operators. After Pods are created, a Pod scheduler places them onto nodes for execution. PinCompute's Pod scheduler leverages Kubernetes's scheduler framework, with a combination of upstream and proprietary plugins, so the scheduler supports all features available in open source Kubernetes while being optimized for PinCompute's specific requirements.
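The filter-then-score shape of cluster level scheduling can be sketched in a few lines. All names here are illustrative; the real plugin interface is not public, and the Kubernetes scheduler framework applies the same pattern at the node level.

```python
def pick_cluster(workload, clusters, filters, scorers):
    """Two-phase cluster selection: filters drop clusters that cannot
    fit the workload, then scorers rank the surviving candidates."""
    candidates = [c for c in clusters if all(f(workload, c) for f in filters)]
    if not candidates:
        return None  # unschedulable at the regional level
    return max(candidates, key=lambda c: sum(s(workload, c) for s in scorers))

clusters = [
    {"name": "member-a", "zone": "us-east-1a", "free_cpu": 50, "healthy": True},
    {"name": "member-b", "zone": "us-east-1b", "free_cpu": 400, "healthy": True},
    {"name": "member-c", "zone": "us-east-1b", "free_cpu": 900, "healthy": False},
]
workload = {"cpu": 100, "zone": "us-east-1b"}

filters = [
    lambda w, c: c["healthy"],                 # cluster health
    lambda w, c: c["free_cpu"] >= w["cpu"],    # resource fit
    lambda w, c: c["zone"] == w["zone"],       # placement strategy
]
scorers = [lambda w, c: c["free_cpu"]]         # spread load: prefer headroom

print(pick_cluster(workload, clusters, filters, scorers)["name"])  # member-b
```

Here member-a fails the zone and capacity filters and member-c is unhealthy, so member-b wins despite member-c having more free CPU, which is exactly why health belongs in the filter phase rather than the scoring phase.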

Figure 8: PinCompute scheduling architecture

PinCompute Cost Efficiency

Cost efficiency is critical to PinCompute. We have enacted various strategies to successfully drive down PinCompute infrastructure cost without compromising the user experience.

We promote multi-tenancy by eliminating unnecessary resource reservations and migrating user workloads to the on-demand resource pool shared across the federated environment. We collaborated with major platform users to smooth their workload submission patterns and avoid oversubscription of resources. We also started a platform-level initiative to shift GPU usage from P4 family instances to cost-performant alternatives (i.e. the G5 family). The following diagram shows the trend of PinCompute GPU cost vs. capacity, where we successfully reduced cost while supporting a growing business.

Figure 9: PinCompute GPU cost vs. capacity

Moving forward, there are several ongoing projects in PinCompute to further improve cost efficiency. 1) We will introduce preemptible workloads to encourage more flexible resource sharing. 2) We will enhance the platform's resource tiering and workload queueing mechanisms to make smarter decisions that balance fairness and efficiency when scheduling user workloads.

Node architecture is a critical area where we invested heavily to ensure applications are able to run in a containerized, multi-tenanted environment securely, reliably, and efficiently.

Figure 10: High level architecture of the PinCompute node and infrastructure integrations

Pod in PinCompute

The Pod is designed to isolate tenants on the node. When a Pod is launched, it is atomically granted its own network identity, security principal, and resource isolation boundary, which are immutable for the Pod's lifecycle.

When defining containers within a Pod, users can specify two lifecycle options: main container and sidecar container. Main containers honor the Pod level restart policy, while sidecar containers are guaranteed to be available as long as main containers need to run. In addition, users can enable start up and termination ordering between sidecar and main containers. Pod in PinCompute also supports per-container updates, with which containers can be restarted with a new spec without the Pod being terminated and relaunched. Sidecar container lifecycle and per-container updates are critical features for batch job execution reliability and service deployment efficiency.

PinCompute has a proprietary networking plugin to support a variety of container networking requirements. The host network is reserved for system applications only. "Bridge Port" assigns a node-local, non-routable IP to Pods that do not need to serve traffic. For Pods that do need to serve traffic, we provide a "Routable IP" allocated from a shared network interface, or a Pod can request a "Dedicated ENI" for full network segmentation. Network resources such as ENIs and IP allocations are holistically managed through the cloud resource control plane, which ensures management efficiency.

PinCompute supports a variety of volumes, including EmptyDir, EBS, and EFS. In particular, we have a proprietary volume plugin for logging, which integrates with in-house logging pipelines to ensure efficient and reliable log collection.

Integrating With Pinterest Infrastructure

The PinCompute node contains critical integration points between user containers and Pinterest's infrastructure ecosystem, namely security, traffic, configuration, logging, and observability. These capabilities have independent control planes that are orthogonal to PinCompute, and therefore are not limited to any "Kubernetes cluster" boundary.

Infrastructure capabilities are deployed in three manners: host-level daemon, sidecar container, or a dual mode. Daemons are shared by all Pods running on the node. Logging, metrics, and configuration propagation are deployed as daemons, as they do not need to leverage the Pod's tenancy or sit in the critical data paths of the applications running in the Pod. Sidecar containers operate within the Pod's tenancy and are used by capabilities that rely on the Pod's tenancy or need performance guarantees, such as traffic and security.

User containers interact with infrastructure capabilities such as logging, configuration, and service discovery through file system sharing, and with capabilities such as traffic and metrics through networking (local host or unix domain socket). The Pod, together with the tenancy definition we have, ensures various infrastructure capabilities can be integrated in a secure and effective manner.

Enhanced Operability

The PinCompute node has a proprietary node management system that enhances the visibility and operability of nodes. It contains node level probing mechanisms that deliver supplementary signals for node health, covering areas such as container runtime, DNS, devices, various daemons, etc. These signals serve as a node readiness gate to ensure new nodes are schedulable only after all capabilities are ready, and are also used during application runtime to assist automation and debugging. As part of node quality of service (QoS), when a node is marked for reserved tier workloads, it can provide enhanced QoS management such as configuration pre-downloading or container image cache refresh. The node also exposes runtime APIs such as container shells and live log streaming to help users troubleshoot their workloads.
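The readiness gate described above amounts to aggregating per-capability probe results into a single schedulability decision. The probe names below are illustrative assumptions, not PinCompute's actual signal set.

```python
# Capabilities that must be healthy before a node accepts workloads.
# (Illustrative names: the real probe set is proprietary.)
REQUIRED_PROBES = ("container-runtime", "dns", "logging-daemon", "metrics-daemon")

def node_ready(probe_results: dict) -> bool:
    """A node is schedulable only when every required probe reports
    healthy; a missing probe result counts as unhealthy."""
    return all(probe_results.get(p, False) for p in REQUIRED_PROBES)

print(node_ready({"container-runtime": True, "dns": True,
                  "logging-daemon": True, "metrics-daemon": True}))   # True
print(node_ready({"container-runtime": True, "dns": False,
                  "logging-daemon": True, "metrics-daemon": True}))   # False
```

Treating a missing signal as unhealthy is the conservative choice: a node that has not yet reported on, say, its logging daemon should not receive Pods whose logs would then be dropped.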

Figure 11: PinCompute's proprietary node management system

Prioritizing Automation

Automation has a significant return on investment when it comes to minimizing human error and boosting productivity. PinCompute integrates a range of proprietary services aimed at streamlining daily operations.

Automated Remediation

Operators are often burdened with trivial node health issues. PinCompute is equipped to self-remediate these issues with an automatic remediation service. Health probes running on the Node Manager detect node problems and mark them via specific signal annotations. These signals are monitored and interpreted into actions, and the remediation service then executes actions such as cordoning or terminating. The components for detection, monitoring, and remediation align with principles of decoupling and extensibility. Furthermore, deliberate rate limiting and circuit-breaking mechanisms provide a systematic approach to node health management.
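The signal-to-action loop, with a rate limit as a safety valve, can be sketched as below. Everything here (annotation values, action names, the per-cycle budget) is a hypothetical simplification of the architecture described above: the key property is that a misfiring probe cannot cordon or terminate an entire fleet in one sweep.

```python
class RemediationSketch:
    """Toy remediation loop: health-signal annotations map to actions,
    and a per-cycle budget acts as the rate limit / circuit breaker."""

    ACTIONS = {"disk-pressure": "cordon", "runtime-unhealthy": "terminate"}

    def __init__(self, max_actions_per_cycle=2):
        self.budget = max_actions_per_cycle

    def remediate(self, node_annotations: dict) -> list:
        taken = []
        for node, signal in node_annotations.items():
            action = self.ACTIONS.get(signal)
            if action and self.budget > 0:
                self.budget -= 1          # cap destructive actions per cycle
                taken.append((node, action))
        return taken

svc = RemediationSketch(max_actions_per_cycle=2)
print(svc.remediate({
    "node-1": "disk-pressure",
    "node-2": "runtime-unhealthy",
    "node-3": "disk-pressure",   # over budget: deferred to a later cycle
}))
# [('node-1', 'cordon'), ('node-2', 'terminate')]
```

Deferring node-3 rather than acting on it immediately is the circuit-breaking behavior: if the same signal fires across many nodes at once, it is more likely a bad probe than a real fleet-wide failure.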

Figure 12: PinCompute automated remediation architecture

Application Aware Cluster Rotation

The primary function of the PinCompute Upgrade service is to facilitate rotations of Kubernetes clusters in a safe, fully automated manner, while adhering to both PinCompute platform SLOs and user agreements regarding rotation protocol and graceful termination. When processing a cluster rotation, considerations range from the sequence in which different types of nodes are rotated, to simultaneous rotations, whether nodes are rotated in parallel or individually, and the specific timing of node rotations. Such considerations arise from the diverse nature of user workloads running on the PinCompute platform. Through the PinCompute Upgrade service, platform operators can explicitly dictate how cluster rotations should be carried out, and this configuration allows for a carefully managed automatic progression.

Releasing PinCompute

Platform Verification

The PinCompute release pipeline consists of four stages, each of them an individual federated environment. Changes are deployed through the stages and verified before promotion. An end-to-end test framework operates continuously on PinCompute to validate platform correctness. The framework emulates a real user and functions as a constant canary monitoring the platform's correctness.

Figure 13: PinCompute release process

Machine Image (AMI) Management

PinCompute deliberately offers a finite set of node types, taking into account users' needs for hardware families, manageability, and cost-effectiveness. The AMIs responsible for bootstrapping these nodes fall into three categories: general-purpose AMIs, a machine learning focused AMI, and a customizable AMI. The concept of inheriting from a parent AMI and configuration simplifies their management considerably. Each AMI is tagged by type and version, and they use the Upgrade service to initiate automatic deployments.

Operation and User Facing Tools

In PinCompute, we provide a suite of tools for platform users and administrators to easily operate the platform and the workloads running on it. We built a live-debugging system to give end users UI-based container shells to debug inside their Pods, as well as stream console logs and file-based logs to follow the progress of their running applications. This tool leverages proprietary node level APIs to decouple user debugging from critical control paths such as the Kubernetes API and Kubelet, ensuring failure isolation and scalability. Self-service project management, along with step-by-step tutorials, also reduced users' overhead to onboard new projects or adjust properties of existing projects such as resource quota. PinCompute's cluster management system provides an interactive mechanism for modifying cluster attributes, which makes it easy to iterate on new hardware or adjust capacity settings. These easy-to-use tool chains ensure efficient and scalable operations and over time have greatly improved the user experience of the platform.

PinCompute is designed to support compute requirements at Pinterest scale. Scalability is a complex goal to achieve, and for us, each of PinCompute's Kubernetes clusters is optimized towards a sweet spot of 3000 nodes, 120k pods, and 1000 mutating pod operations per minute, with a 25 second P99 workload end-to-end launch latency. These scaling targets are defined by the requirements of most applications at Pinterest, and are the result of balancing multiple factors including cluster size, workload agility, operability, blast radius, and efficiency. This scaling target makes each Kubernetes cluster a solid building block for overall compute, and PinCompute's architecture can horizontally scale by adding more member clusters to ensure sufficient scalability for the continuous growth of PinCompute's footprint.

PinCompute defines its SLOs in two forms: API availability and platform responsiveness. PinCompute ensures 99.9% availability of its critical workload orchestration related APIs. PinCompute offers an SLO on control plane reconcile latency, which focuses on how long the system takes to act; this latency varies from seconds to tens of seconds based on workload complexity and the corresponding business requirements. For the reserved tier quality of service, PinCompute provides an SLO on workload end-to-end launch speed, which covers not only the platform taking action, but also how fast those actions take effect. These SLOs are important indicators of platform level performance and availability, and also set a high bar for platform developers to iterate on platform capabilities with high quality.

Over the past few years, we have matured the platform both in its architecture and in the set of capabilities Pinterest requires. Introducing compute as a Platform as a Service (PaaS) has been seen as the biggest win for Pinterest developers. An internal analysis showed that > 90% of use cases, representing > 60% of infrastructure footprint, can benefit from leveraging a PaaS to iterate on their software. For platform users, PaaS abstracts away the undifferentiated heavy lifting of owning and managing infrastructure and Kubernetes, and allows them to focus on the unique aspects of their applications. For platform operators, PaaS enables holistic infrastructure management through standardization, which provides opportunities to enhance efficiency and reduce the cost of keeping infrastructure up-to-date. PinCompute embraces "API First", which defines a crisp support contract and makes the platform programmable and extensible. Moreover, a solid definition of "tenancy" in the platform establishes clear boundaries between use cases and their interactions with infrastructure capabilities, which is critical to the success of a multi-tenanted platform. Last but not least, by doubling down on automation, we were able to improve support response time and reduce team KTLO and on-call overhead.

There are many exciting opportunities as PinCompute keeps growing its footprint at Pinterest. Resource management and efficiency is a big area we are working on; projects such as multi-tenant cost attribution, efficient bin packing, autoscaling, and capacity forecasting are critical to supporting an efficient and accountable infrastructure at Pinterest. Orchestrating stateful applications is both technically challenging and important to Pinterest's business, and while PinPod and PinApp provide solid foundations for orchestrating applications, we are actively working with stakeholders of stateful systems on shareable solutions to improve operational efficiency and reduce maintenance costs. We also recognize the importance of use cases being able to access the Kubernetes API. As Kubernetes and its communities actively evolve, there is great benefit in following industry trends and adopting industry standard practices, and therefore we are actively working with partner teams and vendors to enable more Pinterest developers to do so. Meanwhile, we are working on contributing back to the community, as we believe a widely trusted community is the best platform to build a shared understanding, contribute features and improvements, and share and absorb wins and learnings in production for the good of all. Finally, we are evaluating opportunities to leverage managed services to further offload infrastructure management to our cloud provider.

It has been a multi-year effort to evolve PinCompute to enable multiple use cases across Pinterest. We'd like to acknowledge the following teams and individuals who worked closely with us in building, iterating, productizing, and improving PinCompute:

  • ML Platform: Karthik Anantha Padmanabhan, Chia-Wei Chen
  • Workflow Platform: Evan Li, Dinghang Yu
  • Online Systems: Ping Jin, Zhihuang Chen
  • App Foundation: Yen-Wei Liu, Alice Yang
  • Ads Delivery Infra: Huiqing Zhou
  • Traffic Engineering: Scott Beardsley, James Fish, Tian Zhao
  • Observability: Nomy Abbas, Brian Overstreet, Wei Zhu, Kayla Lin
  • Continuous Delivery Platform: Naga Bharath Kumar Mayakuntla, Trent Robbins, Mitch Goodman
  • Platform Security: Cedric Staub, Jeremy Krach
  • TPM — Governance and Platforms: Anthony Suarez, Svetlana Vaz Menezes Pereira

To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore and apply to open roles, visit our Careers page.