Key concepts
Understand the mental model behind connectors — types, lifecycle, configurations, and the Container API sidecar.
Before you build anything, it helps to understand how the pieces fit together. This page covers the five things every connector developer needs to know.
What is a connector?
A connector is a CLI application, packaged as a Docker image, that moves data in or out of the Productsup platform. Connectors let Productsup connect to external systems — importing product data from sources like APIs, files, or databases, and exporting processed data to channels like Amazon, Google Merchant Center, or Facebook Dynamic Ads.
At runtime, your connector runs inside a Docker container alongside a sidecar called the Container API. Your code communicates with the sidecar over HTTP to read input data, write output data, log messages, and access storage.
Your connector's job is straightforward:
- Receive configuration (API keys, URLs, etc.) via environment variables
- Do its work — fetch data from a third-party API, export products, transform data, etc.
- Use the Container API to read/write data and send logs
- Exit with code `0` (success) or `1`–`254` (failure — any non-zero exit code signals an error)
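The steps above can be sketched as a minimal entry point. This is an illustrative skeleton, not part of any SDK — the work callable stands in for your own connector logic:

```php
<?php
// Minimal sketch of a connector entry point. The $work callable stands in
// for your own import/export logic; nothing here comes from the SDK.
function main(callable $work): int
{
    try {
        $work();
        return 0; // success
    } catch (Throwable $e) {
        fwrite(STDERR, $e->getMessage() . PHP_EOL);
        return 1; // any code from 1 to 254 signals failure
    }
}

// A real connector would end its script with:
// exit(main(fn () => $service->run()));
```

Mapping all failures to a single non-zero exit code is enough for the platform to register an error; the details belong in logs and feedback files.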
Docker requirements
Your connector code is wrapped in a Docker image that the CDE builds from your Git repository. Keep in mind:
- You can use any base image available on Docker Hub
- All required runtimes must be available inside the image
- Dockerfiles must end with a `CMD` instruction — `ENTRYPOINT` is not supported
- The CDE API overrides the command at runtime using your application config
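A minimal Dockerfile following these rules might look like the sketch below. The base image and file paths are illustrative assumptions, not platform requirements:

```dockerfile
FROM php:8.3-cli

# Copy the connector code into the image; all required runtimes
# and dependencies must be available inside the image.
COPY . /app
WORKDIR /app

# Must end with CMD — ENTRYPOINT is not supported. The CDE API
# overrides this command at runtime using your application config.
CMD ["php", "bin/console", "app:run"]
```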
Where connectors run in the platform
Productsup organizes data in three levels: Account → Project → Site. A site stores product data and runs the import → process → export pipeline. Connectors plug into this pipeline.
- External data source — a company stores product data in a third-party system or file
- Data source added — an end-user adds a data source to a site in Productsup, telling the platform where and how to access the data
- Import — during a site run, a datasource connector fetches raw data, flattens it, and sends it to Productsup via the Container API
- Merge and process — the platform merges data from all data sources added to the site and processes it according to site rules (filtering, mapping, transformations)
- Site stores processed data — the processed data is stored for the current run only; any new full site run deletes it and starts fresh
- Export added — an end-user adds an export channel to the site, telling the platform where to send the processed data
- Export — an export or export-delta connector reads the processed data and sends it to a third-party channel
- Re-export — the platform can re-export stored data as many times as needed without reimporting, until a new full site run begins
If you're building a datasource connector, your code runs at step 3. If you're building an export or export-delta connector, your code runs at steps 7 and 8.
Productsup never stores raw product data. Each full site run deletes the previous run's data and reimports from scratch. Delta exports track changes between runs using metadata.
Connectors execute during site runs, which can be manual or scheduled. Every run gets a unique process ID you can use for monitoring and debugging.
Connector types
Every connector has a type that determines its role in the data pipeline. You choose the type when creating a connector — it cannot be changed later.
| Type | Direction | What it does |
|---|---|---|
| `datasource` | Inbound | Downloads data from an external data source (API, file, database) and imports it into Productsup via the Container API |
| `export` | Outbound | Gathers all processed product data in a site and sends it to a third-party export channel every run. Uses a single input type: `full` |
| `export-delta` | Outbound | Gathers only the data that changed since the last site run and sends it to a third-party export channel. Uses four input types: `new`, `modified`, `unchanged`, `deleted` |
| `data-service` | Transform | Transforms data within the pipeline — validates, enriches, or reshapes product data |
| `download` | Inbound | Downloads files via HTTP(S) to exchange storage for further processing |
| `transform` | Transform | Reads files from exchange storage, parses and transforms them, and writes product data to output |
All connector types use Docker and the Container API. The most common types are datasource, export, and export-delta. The download and transform types often work together — download fetches a file, transform parses it.
Export vs. export-delta: An export connector receives all products every run — use it when the third-party API expects a full product feed each time. An export-delta connector receives only the products that changed since the last run, split into four streams:
- new — products added to the site since the last run
- modified — products changed since the last run
- unchanged — products with no changes since the last run
- deleted — products removed since the last run
Use export-delta when the third-party system supports incremental updates. Your connector code must match its type — an export connector requesting the new input type, or an export-delta connector requesting full, will cause an error.
Both export and export-delta connectors should write feedback files as output. Feedback captures export failures (invalid data, auth issues, unavailable servers) and gets imported as an additional data source on the next site run, letting end-users troubleshoot failed exports.
For a detailed comparison, see Connector types.
The connector lifecycle
A connector version moves through a series of states from creation to production. You trigger these transitions through the Dev Portal.
```
Created → Updating → Updated → Building → Built
                                            │
                                            ▼
                        SynchronizingToDev → SynchronizedToDev
                                            │
                                            ▼
                       SynchronizingToProd → SynchronizedToProd
```
| State | What it means |
|---|---|
| `Created` | Connector version exists but has no configuration yet |
| `Updating` | Some configurations exist but are not sufficient to build |
| `Updated` | All required configuration is in place — ready to build |
| `Building` | Docker image is being built from your Git repository |
| `Built` | Docker image built successfully |
| `SynchronizingToDev` | Image is being deployed to the dev environment |
| `SynchronizedToDev` | Running on dev — you can test it on the Productsup platform |
| `SynchronizingToProd` | Image is being deployed to production |
| `SynchronizedToProd` | Live in production — you can enable access for users or request a global release |
Typical workflow
New connector: Create → Configure (reach Updated) → Build → Sync to dev → Test → Sync to prod → Enable access or release
Updating an existing connector: Change code or config → Rebuild → Sync to dev → Test → Sync to prod. The current production version stays live until the new sync completes. After a successful sync, all users automatically switch to the new version.
Enabling access vs. releasing
Once your connector is synchronized to production, you have two ways to make it available:
- Enable access — makes the connector available to specific accounts, projects, or sites. Use this for connectors intended for specific customers or internal use. Note that only export and export-delta connectors can be assigned to production sites.
- Release — makes the connector available globally on the platform for everyone. Use this when the connector is ready for general use.
If a build fails, check the build logs in the Dev Portal to diagnose the issue. You must trigger a new build every time code or configuration changes.
To keep multiple production versions, create a new connector version. Each version has its own configuration, dev environment, and prod environment, independent of other versions.
How configurations reach your code
When you set up a connector in the Dev Portal, you define individual configuration fields — form fields that end-users fill in when they use your connector (for example, an API key or a base URL).
At runtime, these values are passed to your connector depending on the execution mode:
as-env-options (recommended)
Individual configurations are passed as environment variables using SNAKE_CASE naming:
```
FIRST_OPTION=value submitted by user
SECOND_OPTION=value submitted by user
```
as-command-options
Individual configurations are passed as CLI flags appended to your command:
```
[command] [arguments] --first-option='value submitted by user' --second-option='value submitted by user'
```
You choose the execution mode when creating a connector or connector version. New connectors should use as-env-options — it's simpler and works with any framework.
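As a sketch of how as-env-options looks without any framework, a connector can read the variables directly with `getenv()`. The field name `FIRST_OPTION` is illustrative, not platform-defined:

```php
<?php
// Sketch of reading as-env-options configuration in plain PHP.
// FIRST_OPTION is an illustrative field name, not part of the platform.
function configFromEnv(array $keys): array
{
    $config = [];
    foreach ($keys as $key) {
        $value = getenv($key);
        if ($value === false) {
            throw new RuntimeException("Missing required configuration: $key");
        }
        $config[$key] = $value;
    }
    return $config;
}

// Simulate what the platform would set before launching the container:
putenv('FIRST_OPTION=value submitted by user');
$config = configFromEnv(['FIRST_OPTION']);
```

Failing fast on a missing variable surfaces misconfiguration at startup instead of partway through a run.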
In your code, you access configuration values through your framework's standard mechanism. For example, in a Symfony-based connector:
```yaml
App\Service\MyService:
    arguments:
        $apiKey: '%env(API_KEY)%'
        $batchSize: '%env(int:BATCH)%'
```
The Container API sidecar
The Container API is an HTTP server that runs alongside your connector at http://cde-container-api. It is the bridge between your code and the Productsup platform.
Each time your connector runs, it gets a unique Container API instance that is context-aware — it automatically knows which site and account it belongs to, so you can only access data relevant to that specific site.
Your connector calls it to:
- Read input data — for export and export-delta connectors, read the product data that needs to be exported
- Write output data — for datasource connectors, write the products you fetched
- Write feedback — report export results back to the platform (success/failure per product)
- Log messages — send logs visible in the Dev Portal monitoring
- Send notifications — show messages to end-users in the Productsup notification panel
- Write to error log — record errors for debugging
- Access storage — read/write files to persistent buckets, temporary exchange storage, or the transport server
The Container API is only accessible from inside the Productsup infrastructure. You cannot call it from outside.
Using the PHP SDK
While you can call the Container API directly over HTTP, the recommended approach is the PHP SDK:
```
composer require productsupcom/container-api-client
```
The SDK wraps all HTTP calls into a typed interface. Here's what a datasource connector looks like:
```php
use Productsup\CDE\ContainerApi\ContainerApiInterface;

readonly class MyDataSourceService
{
    public function __construct(
        private ContainerApiInterface $containerApi,
    ) {}

    public function run(): void
    {
        $products = $this->fetchProducts(); // your logic
        $this->containerApi->appendManyToOutputFile($products);
    }
}
```
For export connectors, you read input instead of writing output:
```php
foreach ($this->containerApi->yieldFromInputFile() as $product) {
    $this->sendToThirdParty($product);
    $this->containerApi->appendToFeedbackFile([
        'id' => $product['id'],
        'status' => 'success',
    ]);
}
```
For the full Container API reference, see Container API.
Limitations
There are a few platform constraints to design around:
- Log lines — 300 per minute, 7,200 per connector run. Can be raised on request.
- Products per site — 10,000,000 maximum. Can be raised on request.
- Flat data model — the Productsup data model is flat (rows and columns). Your connector must flatten any nested data before writing it to the output.
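Because the data model is flat, nested structures from a third-party API must be collapsed into single columns before writing output. One common approach is dot-notation column names — a minimal sketch, where the function name and naming convention are illustrative, not part of the SDK:

```php
<?php
// Sketch of flattening nested data into flat rows and columns.
// The dot-notation convention here is an illustrative choice.
function flattenProduct(array $nested, string $prefix = ''): array
{
    $flat = [];
    foreach ($nested as $key => $value) {
        $column = $prefix === '' ? (string) $key : $prefix . '.' . $key;
        if (is_array($value)) {
            // Recurse into nested structures, extending the column name
            $flat += flattenProduct($value, $column);
        } else {
            $flat[$column] = $value;
        }
    }
    return $flat;
}

$row = flattenProduct([
    'id'    => 'sku-1',
    'price' => ['amount' => 19.99, 'currency' => 'EUR'],
]);
// Produces columns: id, price.amount, price.currency
```

Each flattened row can then be passed to the output-writing methods shown above.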
Next steps
Now that you understand the mental model:
- Quickstart — build and deploy your first datasource connector
- Connector types — deep dive on each type
- Dev Portal — learn the UI for managing connectors