r/azuredevops

Azure pipeline does not trigger when Pipeline YAML is in different branch
▲ 2 r/azuredevops+1 crossposts

In Azure Pipelines, I am working on a repo `test` that has three branches: main, develop, and ci. The repo is hosted in Azure Repos (Git).
My ci branch contains an Azure Pipelines YAML file, and the pipeline was created from that YAML.
Now I want the pipeline to trigger automatically when a PR is raised from develop to main.
Please note that main and develop do not contain the pipeline YAML file.

Steps I have followed

  1. Set a branch build policy for the automatic trigger, as described here: Build Validation
  2. Changed the pipeline's default branch; I have set the default branch to ci

Even after these settings, the pipeline does not trigger automatically when a PR is raised from develop to main.
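For reference, a minimal sketch of what the YAML on the ci branch might look like for this setup. This is a hypothetical example, not the poster's actual file; with a Build Validation branch policy on Azure Repos, the policy is what queues the run, so the YAML's own triggers can be disabled:

```yaml
# azure-pipelines.yml on the ci branch (hypothetical sketch).
# For Azure Repos, PR runs are driven by the Build Validation branch
# policy, not by YAML `pr:` triggers, so both triggers are turned off.
trigger: none   # no CI trigger on pushes
pr: none        # PR validation comes from the branch policy

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo "Validating PR from develop to main"
    displayName: Run PR validation
```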

The PR refers to the pipeline, but the status is stuck, as shown below:

https://preview.redd.it/tvny2xdwrbwg1.png?width=2015&format=png&auto=webp&s=3057648c9f63f4b195b89fcfc2bbd900898af094

Please help: is this possible? If yes, how can I achieve it?

reddit.com
u/Ok_Scheme344 — 1 day ago
▲ 7 r/azuredevops+1 crossposts

ADO Pipeline managing github repos using terraform

Is anyone currently using Azure DevOps pipelines to run Terraform for managing GitHub Enterprise repositories?

If so, I’d appreciate insight into how you’ve implemented this. In most cases, we rely on Azure DevOps service connections with appropriate RBAC permissions to handle authentication and access. However, in this scenario, it seems that a service connection only addresses part of the problem.

Are there alternative approaches or best practices I may be overlooking? One option I’m considering is using a GitHub App with the necessary permissions, but I’m interested in how others have approached this.
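If the GitHub App route pans out, the provider wiring might look roughly like this. A sketch only, with every variable name an assumption; the App credentials would come from pipeline secrets or a key vault, not source control:

```hcl
# Hypothetical sketch: Terraform's GitHub provider authenticating
# as a GitHub App instead of a personal access token.
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 6.0"
    }
  }
}

provider "github" {
  owner = var.github_org
  app_auth {
    id              = var.github_app_id
    installation_id = var.github_app_installation_id
    pem_file        = var.github_app_pem
  }
}

# Example managed resource: a private repository in the org.
resource "github_repository" "example" {
  name       = "example-repo"
  visibility = "private"
}
```

The App's fine-grained permissions (repo administration, contents, etc.) then take the place of the broad token an ADO service connection would otherwise hold.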

If you’re currently doing something similar, would you be willing to share details of your Azure DevOps pipeline or point to any existing examples for reference?

reddit.com
u/PizzaSalsa — 2 days ago

Anyone using the Azure Devops Connector for MS 365 Copilot?

There are a lot of nifty project-management-and-AI things coming out now for Jira, Linear, and GitHub. ADO seems to be lacking.

I had our IT dept set up this connector:
https://learn.microsoft.com/en-us/microsoft-365/copilot/connectors/azure-devops-work-items-overview

The hope was that Copilot could be used to summarize projects, improve AC, descriptions, suggest tests, etc.

After getting it configured, it turns out that Copilot Chat doesn't use the connector. Instead, you have to also build an agent and add that to Teams. That means I'll need to submit requests to use Copilot Studio, research costs, learn how to build an agent and probably other things. Before I keep going down this rabbit hole, has anyone already used this, and if it's worthwhile?

u/captrespect — 4 days ago
▲ 1 r/azuredevops+1 crossposts

Chapter 4: Learn Kubernetes for beginners

In the last chapter we initialized our first cluster and learned about #Pods and #YAML deployments. In Chapter 4 I cover the basics of #Networking and #Services within #Kubernetes: how everything communicates inside and outside the cluster. Let me know what you think about this chapter, and keep #LearningTogether.

youtube.com
u/That-Ad8566 — 4 days ago

Gave an LLM an SQL interface to our CI logs, and sharing what we learned

Disclosure up front: I'm a co-founder at Mendral (YC W26). We build an agent that debugs CI failures. Not a pitch, sharing what we learned.

We run around 1.5B CI log lines and 700K jobs per week through ClickHouse for our agent to query. It writes its own SQL, no predefined tool API. The LLM-on-logs angle is covered to death. The CI-specific parts are what I haven't seen discussed much.

1) GitHub's rate limit is hard to deal with.

15K requests per hour per App installation. We're continuously polling workflow runs, jobs, steps, and logs across dozens of active repos, while the agent itself also needs to hit the API to pull PR diffs, post comments, and open PRs. A single big commit can spawn hundreds of parallel jobs, each producing logs you need to fetch.

Early on we'd burst, hit the ceiling, fall 30+ minutes behind, and the agent would be reasoning about stale data. Useless if an engineer is staring at a red build right now.

We cap ingestion at ~3 req/s steady and use durable execution (we're on Inngest): when we hit the limit we read X-RateLimit-Reset, add 10% jitter, and suspend the workflow with full state checkpointed. When the window resets, execution picks up at the exact API call it left off on, so there's no retry logic, no dedup, no idempotency work. The rate limit becomes a pause button. P95 ingestion delay is under 5 minutes, usually seconds.

2) Raw SQL beat a constrained tool.

We started with the usual get_failure_rate(workflow, days), get_logs(job_id), etc. That capped the agent, so we switched to raw SQL against a documented schema, which unlocked investigations we never scripted. Recent models write good ClickHouse SQL because there's a huge amount of it in the training data. The median investigation across 52K queries is 4 queries, 335K rows scanned, ~110ms per raw-log query.
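To make the contrast concrete, here's the kind of ad-hoc query a fixed tool API would never have exposed. The table and column names are invented for the sketch, not our real schema:

```sql
-- Hypothetical ClickHouse query: per-workflow failure rate over 7 days,
-- restricted to one runner label. No predefined tool would cover this
-- exact slice, but raw SQL handles it trivially.
SELECT
    workflow_name,
    countIf(conclusion = 'failure') AS failures,
    count() AS runs,
    round(failures / runs, 3) AS failure_rate
FROM ci_jobs
WHERE started_at >= now() - INTERVAL 7 DAY
  AND runner_label = 'ubuntu-large'
GROUP BY workflow_name
ORDER BY failure_rate DESC
LIMIT 20;
```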

3) ClickHouse for storage.

Every log line in our table carries 48 columns of run-level metadata: commit SHA, author, branch, PR title, workflow name, job name, runner info, timestamps. In a row store this is insane. In ClickHouse with ZSTD, commit_message compresses 301:1 because every log line in a run shares the same value. The whole table lands at ~21 bytes per log line on disk including all 48 columns. The real win isn't the disk savings, it's that the agent can filter by any column without a join. When it asks "show me failures on this runner label, in the last 14 days, where the PR author is X," there's no join needed.
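The layout described above might look roughly like this in DDL form. A sketch with invented names, showing only a handful of the 48 metadata columns:

```sql
-- Hypothetical sketch of a denormalized log-line table.
-- Repeated run-level values (commit_message, branch, ...) compress
-- extremely well because ClickHouse stores each column contiguously,
-- so ZSTD sees long runs of identical values.
CREATE TABLE ci_log_lines
(
    ts              DateTime64(3),
    run_id          UInt64,
    workflow_name   LowCardinality(String),
    job_name        LowCardinality(String),
    branch          LowCardinality(String),
    commit_sha      FixedString(40),
    commit_message  String CODEC(ZSTD(3)),
    pr_author       LowCardinality(String),
    runner_label    LowCardinality(String),
    line            String CODEC(ZSTD(3))
)
ENGINE = MergeTree
ORDER BY (run_id, ts);
```

With every metadata column on the row itself, a filter like "this runner label, last 14 days, PR author X" is a plain WHERE clause rather than a join against a runs table.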

Questions:

  • Anyone running an ingestion layer against GitHub Actions (or Buildkite, CircleCI) that has to share API budget with other consumers? How are you splitting it? We ended up keeping ~4K req/hour headroom for the agent and tuning ingestion under 3 req/s. Trial and error.
  • Anyone using columnar stores (ClickHouse, DuckDB, Druid) for CI observability specifically, vs general log platforms (Loki, Elastic)? Tradeoffs?

We made a longer writeup in case it's useful: https://www.mendral.com/blog/llms-are-good-at-sql

u/samalba42 — 4 days ago