20k applications(CRD) in ArgoCD overloads kube apiserver #16212
snehagupta-coder asked this question in Q&A · Unanswered
Replies: 2 comments
-
any update?
-
We are seeing the same high usage on both the API server and our Git host.
Our argocd-cmd-params-cm data:

```yaml
redis.server: argocd-redis-ha-haproxy:6379
redis.compression: 'none'
applicationsetcontroller.enable.scm.providers: 'false'
applicationsetcontroller.repo.server.timeout.seconds: "120"
applicationsetcontroller.concurrent.reconciliations.max: "20"
applicationsetcontroller.log.format: "json"
applicationsetcontroller.log.level: "info"
notificationscontroller.log.format: "json"
notificationscontroller.log.level: "info"
controller.log.format: "json"
controller.log.level: "warn"
controller.status.processors: "100"
controller.operation.processors: "50"
controller.sharding.algorithm: round-robin
controller.app.state.cache.expiration: "24h0m0s"
controller.resource.health.persist: "false"
controller.cluster.cache.batch.events.processing: "true"
controller.cluster.cache.events.processing.interval: "1s"
controller.cluster.cache.resync: "1h"
server.log.format: "json"
server.log.level: "warn"
server.connection.status.cache.expiration: "15m"
server.app.state.cache.expiration: "30m"
reposerver.log.format: "json"
reposerver.log.level: "warn"
reposerver.default.cache.expiration: "24h"
reposerver.repo.cache.expiration: "24h"
reposerver.git.request.timeout: "60s"
```

Any ideas on how to reduce load on both the API server and the Git repositories, e.g. through Redis cache tuning or by cutting down ArgoCD's request volume?
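One lever that is not in the config above is the app refresh interval, which lives in the argocd-cm ConfigMap rather than argocd-cmd-params-cm. A minimal sketch, assuming a recent ArgoCD version that supports the jitter setting; the values shown are illustrative, not recommendations:

```yaml
# argocd-cm (sketch): lengthen the periodic refresh and add jitter so that
# thousands of apps do not hit the API server and Git host in lockstep.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: "1800s"       # refresh every 30m instead of the 3m default
  timeout.reconciliation.jitter: "300s" # spread refreshes across a 5m window
```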
-
Hello,
I have a question about the recommended architecture for ArgoCD.
My company uses ArgoCD extensively for application deployment. We currently host ~350 ArgoCD instances on a single Kubernetes cluster with ~80 worker nodes. These instances deploy ~160 real-world applications to ~140 remote clusters, which maps to roughly 20k Application CRs managed and synced every three minutes by ArgoCD.
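As a rough back-of-the-envelope check (my own arithmetic, assuming syncs are spread evenly over the interval), the sync interval directly sets the baseline reconcile rate:

```python
# Illustrative estimate of baseline reconcile pressure from the numbers above.
APPS = 20_000  # ~20k Application CRs

def reconciles_per_second(apps: int, interval_seconds: int) -> float:
    """Average reconciles/sec if syncs are spread evenly over the interval."""
    return apps / interval_seconds

# 3-minute interval: ~111 reconciles/sec; 30-minute interval: ~11/sec.
rate_3min = reconciles_per_second(APPS, 3 * 60)
rate_30min = reconciles_per_second(APPS, 30 * 60)
print(f"{rate_3min:.1f} vs {rate_30min:.1f} reconciles/sec")
```

Each reconcile fans out into several API calls (project GET, application PATCH, secret reads), so the per-second API request rate is a multiple of this.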
As a result, we observe constantly high CPU usage on the kube-apiserver, above 30 cores.
The highest number of Kube API calls observed are for secrets, appprojects and applications.
```
{resource="secrets", verb="LIST"}
{group="argoproj.io", resource="appprojects", verb="GET"}
{group="argoproj.io", resource="applications", verb="PATCH"}
{resource="secrets", verb="GET"}
```
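For anyone wanting to reproduce this ranking, a sketch of a Prometheus query against the standard `apiserver_request_total` metric (label names can vary by Kubernetes version):

```promql
# Top 10 (group, resource, verb) tuples by API request rate over 5 minutes
topk(10, sum by (group, resource, verb) (rate(apiserver_request_total[5m])))
```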
We have already increased the sync interval from 3 minutes to 30 minutes, which reduces the load on the kube-apiserver somewhat.
However, we expect ArgoCD usage to grow: as more teams adopt ArgoCD and deploy more services to more clusters, the total number of Application CRs may increase ~25x to around 500k, managed by multiple ArgoCD instances but still hosted on the same Kubernetes cluster. That would put an enormous load on the single Kubernetes cluster hosting ArgoCD. We are looking for suggestions from the community on how to scale this setup.
Any input in this area would be a great help. Thank you in advance; looking forward to your suggestions.