OpenAI dropped gpt-image-2 yesterday. It’s now by far the most capable image model on the market. Better instruction following, stronger text rendering, world awareness. I use n8n every day for AI workflows and agent-building, so naturally I wanted this in n8n, like, yesterday…
The official OpenAI node doesn’t support it yet. That’s expected — new models take time to land in n8n nodes, but waiting around for that to happen is optional. The HTTP node exists for exactly this reason.
Here’s the four-node workflow I built as a stopgap.
What it does
Manual trigger → HTTP POST to the OpenAI Image API → extract the base64 string → convert to binary PNG. That’s it. By the end, you have a proper binary file you can pipe into any downstream node: write to disk, upload to S3, attach to an email, whatever.
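For anyone who wants to see the data handling spelled out, here is the same extract-and-convert logic as a Python sketch. The live step would be an HTTP POST to https://api.openai.com/v1/images/generations with an `Authorization: Bearer <key>` header; a mock response stands in below, shaped like the Images API reply (`data[0].b64_json`).

```python
import base64

def extract_image_bytes(response: dict) -> bytes:
    """Mirror the Extract Base64 and Convert to PNG nodes."""
    b64 = response["data"][0]["b64_json"]  # Extract Base64
    return base64.b64decode(b64)           # Convert to binary PNG

# Mock response (real PNG bytes omitted; only the shape matters here).
fake_png = b"\x89PNG\r\n\x1a\n...image data..."
mock_response = {"data": [{"b64_json": base64.b64encode(fake_png).decode()}]}

image_bytes = extract_image_bytes(mock_response)
assert image_bytes == fake_png
# Downstream you'd write it out, e.g.:
# with open("out.png", "wb") as f:
#     f.write(image_bytes)
```

n8n's Convert to File node does exactly this decode step for you, which is why the workflow needs no code at all.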
The model is gpt-image-2 with quality set to low, which is right for fast drafts and the prompt engineering phase (though use chat itself for most of that). Medium is the right default for most actual use cases: good enough for most things, cheap enough for high volume ($0.053 per 1024×1024 image). Flip it to high when you need final-asset quality.
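At that quoted rate, back-of-envelope batch costs are easy to sanity-check. Note the $0.053 figure is this article's number for medium quality at 1024×1024, not an API constant; check OpenAI's pricing page for current rates.

```python
# Article's quoted rate for a medium-quality 1024x1024 image.
MEDIUM_1024_USD = 0.053

def batch_cost(n_images: int, per_image_usd: float = MEDIUM_1024_USD) -> float:
    """Rough total cost in USD for a batch of images."""
    return round(n_images * per_image_usd, 2)

print(batch_cost(100))   # 5.3
print(batch_cost(1000))  # 53.0
```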
Setting it up
1. Import the workflow (copy and paste from below)
Copy the JSON below and paste it onto the n8n canvas.
2. Configure the OpenAI credential
If you haven’t used OpenAI from the HTTP node before, open the Generate Image (gpt-image-2) node and create a new credential. The node is pre-wired to n8n’s built-in OpenAI credential type (openAiApi), so all you need to enter is your API key. If you already have an OpenAI credential, n8n should select it for you. If you have multiple, pick the one you want.
3. Swap in your prompt
Open the Generate Image (gpt-image-2) node and find the JSON body. Change the prompt field to whatever you want to generate. The default is “A photorealistic image of a family laughing together on a couch, bathed in the warm glow of a television, perfectly targeted ads playing in the background” (I work in TV advertising. Sue me.)
Obviously you’ll want to make the prompt dynamic and plug this into whatever larger workflow you’re building!
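One way to do that (a sketch; it assumes the incoming item carries a prompt field) is to turn the node’s JSON body into an n8n expression: prefix it with = and let JSON.stringify handle the quoting so prompts with quotes or newlines don’t break the request body:

```json
={
  "model": "gpt-image-2",
  "quality": "low",
  "prompt": {{ JSON.stringify($json.prompt) }}
}
```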
4. Run it
Hit the manual trigger. The final node outputs a binary item with the image attached under the key image. From there, connect it to anything.
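For example, writing the image to disk with a Read/Write Files from Disk node pointed at that binary key would look roughly like this (a sketch; parameter names may differ across n8n versions, so check the node’s own fields):

```json
{
  "parameters": {
    "operation": "write",
    "fileName": "output.png",
    "dataPropertyName": "image",
    "options": {}
  },
  "name": "Save to Disk",
  "type": "n8n-nodes-base.readWriteFile",
  "typeVersion": 1
}
```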
A Note on the API
gpt-image-2 differs from previous GPT Image models in one important way: it uses output tokens for pricing instead of fixed per-image rates. That means larger or higher-quality images cost more — but the pricing is more predictable if you know your size and quality ahead of time. The OpenAI pricing page has a calculator.
One limitation worth knowing: gpt-image-2 doesn’t currently support transparent backgrounds. If you need transparency, gpt-image-1 is still your option.
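If you do need transparency, swapping the request body over to gpt-image-1 with its background parameter should work (a sketch based on the gpt-image-1 Images API options; verify against the current API reference):

```json
{
  "model": "gpt-image-1",
  "quality": "medium",
  "background": "transparent",
  "output_format": "png",
  "prompt": "A flat vector icon of a television on a transparent background"
}
```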
Just until the official node arrives
This workflow will keep working even after the native n8n OpenAI node adds support — but at that point I’ll migrate and so should you. The native node will give you more control and will stay up-to-date with the latest options & APIs. This HTTP approach is fast to set up and easy to reason about, but it’s not how you’d want to run this in production long-term.
For now, it works. Ship it, use it, and swap it out when the better option exists.
The copy & paste workflow
{
  "nodes": [
    {
      "parameters": {},
      "id": "526dc07a-d279-4ff6-9572-3e3dbb834cbf",
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [0, 0]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.openai.com/v1/images/generations",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "openAiApi",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "{\n \"model\":\"gpt-image-2\",\n \"quality\":\"low\",\n \"prompt\":\"A photorealistic image of a family laughing together on a couch, bathed in the warm glow of a television, perfectly targeted ads playing in the background\"\n}",
        "options": {}
      },
      "id": "6f2665ca-e298-4d59-a97c-f75c614eed4c",
      "name": "Generate Image (gpt-image-2)",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.3,
      "position": [304, 0],
      "credentials": {
        "openAiApi": {
          "id": "OKt7V21fcv11ZGwI",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {
        "operation": "toBinary",
        "sourceProperty": "image_b64",
        "binaryPropertyName": "image",
        "options": {}
      },
      "id": "60038c49-85fb-42b7-a29d-0f2900e3ca65",
      "name": "Convert to PNG",
      "type": "n8n-nodes-base.convertToFile",
      "typeVersion": 1.1,
      "position": [912, 0]
    },
    {
      "parameters": {
        "assignments": {
          "assignments": [
            {
              "id": "1",
              "name": "image_b64",
              "value": "={{ $json.data[0].b64_json }}",
              "type": "string"
            }
          ]
        },
        "options": {}
      },
      "id": "6fff7ae2-b5c5-4611-92ba-a51fb9abdaed",
      "name": "Extract Base64",
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [608, 0]
    }
  ],
  "connections": {
    "Manual Trigger": {
      "main": [
        [
          {
            "node": "Generate Image (gpt-image-2)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Image (gpt-image-2)": {
      "main": [
        [
          {
            "node": "Extract Base64",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Base64": {
      "main": [
        [
          {
            "node": "Convert to PNG",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "instanceId": "a463bbe617621da70d36da84cf7e7266cd9748bbc4cb278db0b4ee1ef013821a"
  }
}