Install the Armory Scale Agent in an Existing Spinnaker Instance
Configure the Scale Agent plugin in your clouddriver-local.yml file and use the provided manifests to deploy the service to the same Kubernetes cluster and namespace that Spinnaker is running in.
How to get started using the Scale Agent with open source Spinnaker
This guide assumes you want to evaluate the Scale Agent with an existing Spinnaker test instance. With that in mind:
- Your Spinnaker test instance is running in the spinnaker namespace.
- You have Kubernetes accounts configured in Clouddriver so you can evaluate account migration.
- You are going to deploy the Scale Agent service in the same cluster and namespace as your Spinnaker test instance.
The following features require Spinnaker 1.28+ and Clouddriver Account Management:
- Automated scanning for newly created accounts in Clouddriver and migrating those accounts to Scale Agent management
- Intercepting and processing requests sent to Clouddriver’s <GATE-URL>/credentials endpoint
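For reference, you can see which accounts are currently served through the credentials endpoint; this is a hypothetical spot check that assumes your Gate base URL and that jq is installed:
# List the account names reported by the credentials endpoint (assumption: jq is available)
curl -s <GATE-URL>/credentials | jq '.[].name'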
Objectives
- Meet the prerequisites outlined in the Before you begin section.
- Configure the Clouddriver plugin in your clouddriver-local.yml file and deploy using Halyard.
- Learn the options for migrating Clouddriver accounts to the Scale Agent.
- Configure and deploy the Scale Agent service in the cluster and namespace where Spinnaker is running (Spinnaker Service mode).
- Confirm success.
Since this guide is for installing the Armory Scale Agent in a test environment, it does not include mTLS configuration. The Armory Agent service and plugin do not communicate securely.
Before you begin
- You are familiar with how plugins work in Spinnaker. See open source Spinnaker’s Plugin User Guide. 
- You have read the Scale Agent overview. 
- You have configured Clouddriver to use MySQL or PostgreSQL. See the Configure Clouddriver to use a SQL Database guide for instructions. The Scale Agent plugin uses the SQL database to store cache data and dynamically created Kubernetes accounts. 
- For Clouddriver pods, you have mounted a service account with permissions to list and watch the Kubernetes Endpoints kind in the namespace where Clouddriver is running:
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: spin-sa
  rules:
    - apiGroups:
        - ""
      resources:
        - endpoints
      verbs:
        - list
        - watch
- Verify that there is a Kubernetes Service with the prefix name spin-clouddriver (configurable) routing HTTP traffic to Clouddriver pods, with a port named http (configurable). See the check after this list.
- You have at least one Kubernetes cluster to serve as your deployment target cluster. 
- Choose the Scale Agent version that is compatible with your Spinnaker version.

  | Armory CD (Spinnaker) Version | Scale Agent Plugin Version | Scale Agent Service Version |
  |---|---|---|
  | 2.28.x (1.28.x) | 0.11.56 | 1.0.73 |
  | 1.29.x | 0.12.21 | 1.0.73 |
  | 2.30.x (1.30.x) | 0.13.20 | 1.0.73 |

  Database compatibility:

  | Database | Supported versions |
  |---|---|
  | MySQL | 5.7; AWS Aurora |
  | PostgreSQL | 10+ |
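As a quick check of the Service prerequisite above, you can print the port named http on the spin-clouddriver Service (assuming the default name and the spinnaker namespace):
# Prints the port number of the port named "http"; empty output means the prerequisite is not met
kubectl -n spinnaker get svc spin-clouddriver \
  -o jsonpath='{.spec.ports[?(@.name=="http")].port}'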
Install the plugin
Warning
The Scale Agent plugin extends Clouddriver. When Halyard adds a plugin to a Spinnaker installation, it adds the plugin repository information to each service. This means that when you restart Spinnaker, each service restarts, downloads the plugin, and checks whether an extension exists for that service. Restarting every service is not ideal for large Spinnaker installations because of service restart times. To avoid each service restarting and downloading the plugin, configure the plugin in Clouddriver’s local profile.
This guide shows how to install the plugin using a plugin repository. You can also install the plugin from Docker if you want to cache the plugin and run security scans on it before installation.
If you don’t have a Clouddriver local profile, create one in the same directory as the other Halyard configuration files. This is usually ~/.hal/default/profiles on the machine where Halyard is running.
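For example, assuming Halyard’s default configuration directory:
# Create the profiles directory and an empty Clouddriver local profile if they do not exist
mkdir -p ~/.hal/default/profiles
touch ~/.hal/default/profiles/clouddriver-local.yml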
Add the following to your clouddriver-local.yml file:
This code snippet enables Clouddriver Account Management so you can evaluate the Scale Agent’s interceptor and automatic scanning features.
spinnaker:
  extensibility:
    repositories:
      armory-agent-k8s-spinplug-releases:
        enabled: true
        url: https://raw.githubusercontent.com/armory-io/agent-k8s-spinplug-releases/master/repositories.json
    plugins:
      Armory.Kubesvc:
        enabled: true
        version: 0.13.20 # check compatibility matrix for your Armory CD version
        extensions:
          armory.kubesvc:
            enabled: true
# Plugin config
kubesvc:
  cluster: kubernetes
kubernetes:
  enabled: true
# enable Clouddriver Account Management https://spinnaker.io/docs/setup/other_config/accounts/
account:
  storage:
    enabled: true
The following code snippet does not enable Clouddriver Account Management, which is not supported in Spinnaker 1.27.x and earlier.
spinnaker:
  extensibility:
    repositories:
      armory-agent-k8s-spinplug-releases:
        enabled: true
        url: https://raw.githubusercontent.com/armory-io/agent-k8s-spinplug-releases/master/repositories.json
    plugins:
      Armory.Kubesvc:
        enabled: true
        version: 0.12.21 # check compatibility matrix for your Spinnaker version
        extensions:
          armory.kubesvc:
            enabled: true
kubesvc:
  cluster: kubernetes
kubernetes:
  enabled: true
Save your file and apply your changes by running hal deploy apply. Kubernetes terminates the existing Clouddriver pod and creates a new one. You can validate plugin installation by executing kubectl -n spinnaker logs deployments/spin-clouddriver | grep "Plugin". Output is similar to:
org.pf4j.AbstractPluginManager      :  Plugin 'Armory.Kubesvc@0.11.32' resolved
org.pf4j.AbstractPluginManager      :  Start plugin 'Armory.Kubesvc@0.11.32'
io.armory.kubesvc.KubesvcPlugin     :  Starting Kubesvc  plugin...
Expose Clouddriver as a LoadBalancer
To expose Clouddriver as a Kubernetes-type LoadBalancer, kubectl apply the following manifest:
apiVersion: v1
kind: Service
metadata:
  namespace: spinnaker
  labels:
    app: spin
    cluster: spin-clouddriver
  name: spin-clouddriver-grpc
spec:
  type: LoadBalancer
  ports:
    - name: grpc
      port: 9091
      protocol: TCP
      targetPort: 9091
  selector:
    app: spin
    cluster: spin-clouddriver
Various cloud providers may require additional annotations on LoadBalancer Services. Consult your cloud provider’s documentation.
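For example, on AWS you might request a Network Load Balancer by adding an annotation such as the following fragment to the Service metadata (an illustration only, not a requirement of this guide):
metadata:
  annotations:
    # Assumption: AWS load balancer integration; other providers use different annotations
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"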
Apply the manifest using kubectl.
Get the LoadBalancer IP address
Run kubectl get svc spin-clouddriver-grpc -n spinnaker and make note of the LoadBalancer’s external IP address. You need this address when you configure the Scale Agent service.
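For example, to print only the external address (the field is ip or hostname depending on your provider):
kubectl -n spinnaker get svc spin-clouddriver-grpc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'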
Confirm Clouddriver is listening
Use netcat to confirm that Clouddriver is listening on port 9091 by executing nc -zv [LB address] 9091. Perform this check from a node in your Spinnaker cluster and from a node in your target cluster.
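A successful check prints output similar to the following; the exact wording varies by netcat implementation:
Connection to [LB address] 9091 port [tcp/*] succeeded!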
Options for migrating accounts
In Spinnaker, you can configure Kubernetes accounts in multiple places:
- Clouddriver configuration files: clouddriver.yml, clouddriver-local.yml, spinnaker.yml, spinnaker-local.yml
- Clouddriver database: clouddriver.accounts table
- Spring Cloud Config Server reading accounts from Git, Vault, or another supported backend
- Plugins
You have the following options for migrating accounts:
- You can configure the Scale Agent service to manage specific accounts by adding those accounts to a ConfigMap. This approach means you should remove the accounts from the Clouddriver credential source before you deploy the service.
- You can dynamically migrate accounts after the service has been deployed. This requires kubectl access to the cluster so you can port-forward the endpoint to your local machine.
This guide shows you how to statically add an account to the Scale Agent service configuration before deployment.
Deploy the service using manifests
The Scale Agent service can run with most features on the default ServiceAccount. However, if you want the Scale Agent service to load balance connections or assign a precise Zone ID, the Scale Agent service needs permissions to get Pods, Deployments, ReplicaSets, and Namespaces in your cluster. Rather than modifying the default ServiceAccount permissions, Armory recommends creating a new ServiceAccount, ClusterRole, and ClusterRoleBinding for the Scale Agent.
Configure permissions
The following manifest creates a ServiceAccount, ClusterRole, and ClusterRoleBinding. Apply the manifest in your spinnaker namespace.
# Create agent cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scale-agent-cluster-role
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - jobs
      - jobs/status
      - namespaces
      - namespaces/finalize
      - namespaces/status
      - pods
      - pods/log
      - pods/status
      - secrets
      - services
      - services/status
      - services/finalizers
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
      - delete
  - apiGroups:
      - batch
    resources:
      - jobs
      - jobs/status
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
      - delete
  - apiGroups:
      - apps
      - extensions
    resources:
      - daemonsets
      - daemonsets/status
      - deployments
      - deployments/finalizers
      - deployments/scale
      - deployments/status
      - replicasets
      - replicasets/finalizers
      - replicasets/scale
      - replicasets/status
      - statefulsets
      - statefulsets/finalizers
      - statefulsets/scale
      - statefulsets/status
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
      - delete
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
    verbs:
      - get
      - create
  - apiGroups:
      - spinnaker.armory.io
    resources:
      - "*"
      - spinnakerservices
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - "*"
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - "*"
  - apiGroups:
      - argoproj.io
    resources:
      - "*"
    verbs:
      - "*"
---
# Create agent service account
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: spinnaker
  name: scale-agent-sa
---
# Bind agent cluster role and service account
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: scale-agent-cluster-role-binding
subjects:
  - kind: ServiceAccount
    name: scale-agent-sa
    namespace: spinnaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: scale-agent-cluster-role
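After applying the manifest, you can spot-check the permissions by impersonating the ServiceAccount; this is a quick sanity check, not an exhaustive audit:
# Both commands should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:spinnaker:scale-agent-sa
kubectl auth can-i list deployments.apps --as=system:serviceaccount:spinnaker:scale-agent-sa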
Configure the service
Configure the Armory Scale Agent service using a ConfigMap. In the data section, configure the Clouddriver LoadBalancer address and the Kubernetes account you want the Scale Agent to manage.
Define armory-agent.yml in the data section:
apiVersion: v1
kind: ConfigMap
metadata:
  name: armory-agent-config
  namespace: spinnaker
data:
  armory-agent.yml: |  
Clouddriver plugin LoadBalancer
Replace <LoadBalancer-exposed-address> with the external address of the spin-clouddriver-grpc LoadBalancer that you noted earlier:
apiVersion: v1
kind: ConfigMap
metadata:
  name: armory-agent-config
  namespace: spinnaker
data:
  armory-agent.yml: |
    clouddriver:
      grpc: <LoadBalancer-exposed-address>:9091
      insecure: true    
Kubernetes account
Add your Kubernetes account configuration. This account should not exist in Clouddriver.
apiVersion: v1
kind: ConfigMap
metadata:
  name: armory-agent-config
  namespace: spinnaker
data:
  armory-agent.yml: |
    clouddriver:
      grpc: <LoadBalancer-exposed-address>:9091
      insecure: true
    kubernetes:
      accounts:
        - name:
          kubeconfigFile:
          insecure:
          context:
          oAuthScopes:
          serviceAccount: true
          serviceAccountName: spin-sa
          namespaces: []
          omitNamespaces: []
          onlyNamespacedResources:
          kinds: []
          omitKinds: []
          customResourceDefinitions: [{kind:}]
          metrics:
          permissions: []
          maxResumableResourceAgeMs:
          onlySpinnakerManaged:
          noProxy:
See the Agent options for field explanations.
Apply the manifest in your spinnaker namespace.
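For example, if you saved the ConfigMap manifest as armory-agent-config.yml (a filename chosen here for illustration):
kubectl -n spinnaker apply -f armory-agent-config.yml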
Deploy the Armory Scale Agent service
Apply the following manifest in your spinnaker namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: spinnaker
  labels:
    app: spin
    app.kubernetes.io/name: armory-agent
    app.kubernetes.io/part-of: spinnaker
    cluster: spin-armory-agent
  name: spin-armory-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin
      cluster: spin-armory-agent
  template:
    metadata:
      labels:
        app: spin
        app.kubernetes.io/name: armory-agent
        app.kubernetes.io/part-of: spinnaker
        cluster: spin-armory-agent
    spec:
      serviceAccount: scale-agent-sa
      containers:
        - image: armory/agent-k8s:<version> # must be compatible with your Armory CD version
          imagePullPolicy: IfNotPresent
          name: armory-agent
          ports:
            - name: health
              containerPort: 8082
              protocol: TCP
            - name: metrics
              containerPort: 8008
              protocol: TCP
          readinessProbe:
            httpGet:
              port: health
              path: /health
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/armory/config
              name: volume-armory-agent-config
          # - mountPath: /kubeconfigfiles
          #   name: volume-armory-agent-kubeconfigs
      restartPolicy: Always
      volumes:
        - name: volume-armory-agent-config
          configMap:
            name: armory-agent-config
      # - name: volume-armory-agent-kubeconfigs
      #   secret:
      #     defaultMode: 420
      #     secretName: kubeconfigs-secret
Verify that the plugin and service are communicating
You can access the Clouddriver log to verify that the plugin is running and communicating with the service.
kubectl -n spinnaker logs deployment/spin-clouddriver | grep -E "Start plugin|Starting Kubesvc plugin|Registering agent with"
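You can also check the Scale Agent service’s own logs, using the Deployment name from the manifest above, for messages about connecting and registering with Clouddriver:
kubectl -n spinnaker logs deployment/spin-armory-agent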
Confirm success
Create a pipeline with a Deploy manifest stage. You should see your target cluster available in the Accounts list. Deploy a static manifest.
Uninstall the plugin
Remove the Scale Agent plugin configuration from clouddriver-local.yml and run hal deploy apply to apply the changes.
Uninstall the service
You can use kubectl to delete the Scale Agent service’s Deployment and its accompanying ConfigMap and Secret.
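For example, assuming the object names used in this guide:
kubectl -n spinnaker delete deployment spin-armory-agent
kubectl -n spinnaker delete configmap armory-agent-config
# Only if you created the optional kubeconfigs Secret referenced in the Deployment manifest
# kubectl -n spinnaker delete secret kubeconfigs-secret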
What’s next
- Dynamic Accounts Architecture and Features
- Migrate Clouddriver Kubernetes Accounts to the Armory Scale Agent
- See the Troubleshoot the Armory Scale Agent Service and Plugin page if you run into issues.