MCP exploded. There is no better way to say it. Every developer I knew had at least a handful of MCP servers running with their coding agents. But the context overload from all of those was massive, and I found myself seeing context overflow messages a fair few times. I know for a fact that most of the MCP servers I have enabled, and the tool definitions they carry, did not need to be read by the agent all the time.

So how do I solve this context bloat while still giving the model all the information it needs to produce the desired result? I have been hearing and reading about a few things which could help:

  • Skills (a Claude-only thing until it was made an open spec)
  • Powers in Kiro (an IDE-based approach, not yet in Kiro CLI)
  • Code execution with MCP servers (maybe for another post)

The curiosity around skills peaked with Anton’s Terraform skills and the official HashiCorp agent skills. My experience using them with Kiro CLI is documented here. But how do you create your own skill if one doesn’t exist? The process of testing and refining a new skill turned into a great learning experience about how these skills work in practice.

So what’s the problem statement?

I need a mechanism to effectively inform my agent how to create the configuration needed to bring existing cloud resources under Terraform management. If you hear that, the first thing that comes to mind (and it should) is import statements or blocks. The terraform-search-import skill I am discussing here aims to address that challenge. I do feel there is a need for a “not so green field” series on infrastructure management.

So how did I end up here? Most organizations have infrastructure that wasn’t originally created with Terraform:

  • Resources created through AWS Console, CLI, or other tools during some POC phase OR
  • Legacy infrastructure that predates Infrastructure as Code adoption

Traditionally, importing these resources into Terraform required:

  1. Manual discovery - Finding all resources across your cloud accounts
  2. Manual configuration writing - Creating HCL resource blocks for each resource
  3. Manual import commands - Running terraform import for each resource individually (sketched below)
  4. Trial and error - Getting the resource configuration exactly right
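
To make step 3 concrete, the traditional route is one CLI invocation per resource, and it assumes you have already hand-written the matching resource block (the address and bucket name here are illustrative):

    terraform import aws_s3_bucket.legacy my-legacy-bucket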

This process is time-consuming, error-prone, and doesn’t scale when you have dozens or hundreds of resources. Terraform introduced step-by-step improvements on this approach, first with config-driven imports and then with Terraform Search. The intention with the skill is to handle these scenarios based on the user’s situation and whatever version constraints they may have.

How does it work?

Ideally, the prerequisites around version constraints are met and a user is able to leverage this skill, using Terraform’s new Search functionality to automate discovery and import:

  1. Automated Discovery: Uses .tfquery.hcl files to query for existing resources. These are provider-dependent, and not every provider has a list resource for every resource type. The AWSCC provider is an exception, with support for hundreds of resources.
  2. Automated Generation: Creates Terraform resource blocks with current configurations automatically
  3. Config-driven import method: Generates import blocks using the terraform query subcommand.
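
As a minimal sketch (the same shape you will see in the session log later in this post), a query file for S3 buckets looks like this:

    # discovery.tfquery.hcl (sketch; the region and list resource are illustrative)
    provider "aws" {
      region = "us-east-1"
    }

    list "aws_s3_bucket" "all" {
      provider = aws
    }

Running terraform query against this file prints the discovered buckets, and terraform query -generate-config-out=<file>.tf writes the matching resource and import blocks.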

I had already documented some instructions which I was using with my agents to help import existing resources. All it needed was identifying the agent skills structure and then porting the necessary instructions over.

Adding the Skill

For testing, I needed to set up the terraform-search-import skill in my local Kiro environment. Since this was a new skill that doesn’t exist in the upstream package, I had to create it from scratch, following the agent skill specification and using existing skills as reference. To be honest, the skills documentation is pretty straightforward in guiding you on what is expected.

Note: If your SKILL.md file is growing with examples and lots of external references, you should look at adding them under a directory called references. Agent skill implementations load these reference documents lazily, only when they are needed. This helps reduce the context made available to the model; an overload of information doesn’t always translate to the best results.
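
As an illustration, the SKILL.md body only needs a pointer to such a file; the MANUAL-IMPORT.md name below mirrors the reference file this skill ended up with (you will see it mentioned in the tessl review later):

    ## Resources without list support
    Follow the manual workflow in references/MANUAL-IMPORT.md to gather
    identifiers with the cloud provider's CLI and build the import blocks.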

Below are the steps, specific to Kiro, since the CLI references the skill documents as Skill resources.

  1. Create the skill directory structure:

    mkdir -p ~/.kiro/skills/terraform-search-import/references
    
  2. Create the main skill file (SKILL.md) with proper YAML frontmatter:

    ---
    name: terraform-search-import
    description: Discover existing cloud resources using Terraform Search queries and bulk import them into Terraform management. Use when bringing unmanaged infrastructure under Terraform control, auditing cloud resources, or migrating to IaC.
    ---
    
  3. Add any supporting documentation or examples in the references/ directory.
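
Putting it together, the skill layout ends up looking roughly like this (scripts/ is optional; more on that below):

    terraform-search-import/
    ├── SKILL.md
    ├── references/
    │   └── MANUAL-IMPORT.md
    └── scripts/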

Let’s be honest. Did I start this way? Not really. I had 500+ lines of information in my original SKILL.md file. One of the redundant items in it was the set of scenarios I wanted to cover:

  • Scenario where the list resource existed for a provider.
    • Follows the logic we outlined above.
  • Where a list resource didn’t exist for the resource in a provider.
    • Gather the information using the Cloud provider’s CLI or API and use the resource identity in a config driven import.
  • Where the Terraform version is below the required 1.14.0.
    • Gather the information using the Cloud provider’s CLI or API and use the resource identity in a config driven import.

What list resources are supported?

How can we solve this?

  • Well, you could have the Terraform MCP server look up the resources against a provider every time we needed them. OR
  • Look at the supported list resources I maintain as an automated workflow. OR
  • Execute a deterministic script against the providers in question.

If you look at the Agent Skills specification, there is an additional optional directory a skill can hold:

/scripts

The repository I linked above, with my list of provider-supported list resources, essentially banks on the terraform providers schema -json output. The script in the repository identifies the list/action resources for each provider schema we feed into it. How do we get that schema? Add a provider block to a .tf file and run terraform init.

So what if I moved the list resource discovery into a deterministic script? Then the MCP server doesn’t need to do this either.
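
Here is a minimal sketch of what such a script could look like, assuming terraform init has already run in the working directory; the script name is hypothetical, and the jq path mirrors the command the agent runs in the session log below:

    #!/usr/bin/env bash
    # list-support.sh (hypothetical name): print the list resources a provider schema exposes
    # Usage: ./list-support.sh hashicorp/aws   (run after terraform init)
    set -euo pipefail
    provider="${1:?usage: $0 <namespace/name>}"
    terraform providers schema -json |
      jq -r --arg p "registry.terraform.io/${provider}" \
        '.provider_schemas[$p].list_resource_schemas // {} | keys[]'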

Testing the Skill in Practice

So we have the following resources in the directory Kiro CLI expects them to be in:

  • SKILL.md file
  • /references for edge cases and additional scenarios
  • /scripts to hold scripts (if needed)

Copying this into the ~/.kiro/skills directory is all I needed for kiro-cli to identify it as an available skill in the next session. It helps that the agent configuration was using wildcard characters in the skills resource block.

"resources": ["skill://~/.kiro/skills/**/SKILL.md"]

Actual tests

Let’s run through a couple of scenarios.

Test 1: Supported resources and TF version 1.14.0 or above

With the skill created and installed (or added to the directory), I tested it with the request “Create the terraform configuration for importing the S3 buckets from my AWS account.”

Here’s the complete session log:

Create the terraform configuration for importing the S3 buckets from my AWS account.
I'll help you create Terraform configuration to import existing S3 buckets from your AWS account. Let me use the terraform-search-import skill to discover and import
your S3 buckets.
Reading file: /Users/user_name/.kiro/skills/terraform-search-import/SKILL.md, all lines (using tool: read)
Allow this action? Use 't' to trust (always allow) this tool for the session. [y/n/t]:

[iac] > t

✓ Successfully read 9116 bytes from /Users/user_name/.kiro/skills/terraform-search-import/SKILL.md

Completed in 0.1s
Now I'll create a minimal Terraform configuration to discover and import your S3 buckets. First, let me check what list resources are available for the AWS provider:
I'll create the following file: /Users/user_name/Documents/2026/experiments/agent-skills/terraform.tf (using tool: write)
Purpose: Create basic Terraform configuration with AWS provider

1: terraform {
2: required_version = ">= 1.14"
3: required_providers {
4: aws = {
5: source = "hashicorp/aws"
6: version = "~> 6.0"
7: }
8: }
9: }
Allow this action? Use 't' to trust (always allow) this tool for the session. [y/n/t]:

[iac] > t

Creating: /Users/user_name/Documents/2026/experiments/agent-skills/terraform.tf

Completed in 0.3s
I will run the following command: cd /Users/user_name/Documents/2026/experiments/agent-skills && terraform init (using tool: shell)
Purpose: Initialize Terraform to download AWS provider

Allow this action? Use 't' to trust (always allow) this tool for the session. [y/n/t]:

[iac] > t

Initializing the backend...
Initializing provider plugins...

Finding hashicorp/aws versions matching "~> 6.0"...
Installing hashicorp/aws v6.31.0...
Installed hashicorp/aws v6.31.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Completed in 25.674s
Now let me check what list resources are available for S3:
I will run the following command: cd /Users/user_name/Documents/2026/experiments/agent-skills && terraform providers schema -json | jq '.provider_schemas."registry.terraform.io/hashicorp/aws".list_resource_schemas | keys | .[]' | grep -i s3 (using tool: shell)
Purpose: Check if S3 bucket list resources are supported

"aws_s3_bucket"
"aws_s3_bucket_acl"
"aws_s3_bucket_policy"
"aws_s3_bucket_public_access_block"
"aws_s3_object"

Completed in 2.69s
Great! The AWS provider supports aws_s3_bucket list resources. Now I'll create the query file to discover your S3 buckets:
I'll create the following file: /Users/user_name/Documents/2026/experiments/agent-skills/discovery.tfquery.hcl (using tool: write)
Purpose: Create Terraform query file to discover all S3 buckets

1: provider "aws" {
2: region = "us-east-1"
3: }
4:
5: list "aws_s3_bucket" "all" {
6: provider = aws
7: }
Creating: /Users/user_name/Documents/2026/experiments/agent-skills/discovery.tfquery.hcl

Completed in 0.1s
Now let's run the query to discover your S3 buckets:
I will run the following command: cd /Users/user_name/Documents/2026/experiments/agent-skills && terraform query (using tool: shell)
Purpose: Execute Terraform query to discover S3 buckets

list.aws_s3_bucket.all account_id=############,bucket=agentcore-cicd-terraform-state-d1f38556,region=us-east-1 agentcore-cicd-terraform-state-d1f38556
list.aws_s3_bucket.all account_id=############,bucket=aws-glue-assets-############-us-east-1,region=us-east-1 aws-glue-assets-############-us-east-1
list.aws_s3_bucket.all account_id=############,bucket=awscc-tool-use-2fqr-assets,region=us-east-1 awscc-tool-use-2fqr-assets
list.aws_s3_bucket.all account_id=############,bucket=bedrock-s3-kb,region=us-east-1 bedrock-s3-kb
list.aws_s3_bucket.all account_id=############,bucket=bedrock-video-generation-us-east-1-ymeqc5,region=us-east-1 bedrock-video-generation-us-east-1-ymeqc5

Completed in 76.725s
Perfect! Now let's generate the Terraform configuration for importing all these S3 buckets:
I will run the following command: cd /Users/user_name/Documents/2026/experiments/agent-skills && terraform query -generate-config-out=s3-buckets.tf (using tool: shell)
Purpose: Generate Terraform configuration for importing S3 buckets

list.aws_s3_bucket.all account_id=############,bucket=agentcore-cicd-terraform-state-d1f38556,region=us-east-1 agentcore-cicd-terraform-state-d1f38556
list.aws_s3_bucket.all account_id=############,bucket=aws-glue-assets-############-us-east-1,region=us-east-1 aws-glue-assets-############-us-east-1
list.aws_s3_bucket.all account_id=############,bucket=awscc-tool-use-2fqr-assets,region=us-east-1 awscc-tool-use-2fqr-assets
list.aws_s3_bucket.all account_id=############,bucket=bedrock-s3-kb,region=us-east-1 bedrock-s3-kb
list.aws_s3_bucket.all account_id=############,bucket=bedrock-video-generation-us-east-1-ymeqc5,region=us-east-1 bedrock-video-generation-us-east-1-ymeqc5
list.aws_s3_bucket.all account_id=############,bucket=cdk-hnb659fds-assets-############-us-east-1,region=us-east-1 cdk-hnb659fds-assets-############-us-east-1
list.aws_s3_bucket.all account_id=############,bucket=cf-templates-1xfr5l1bee3rm-us-east-1,region=us-east-1 cf-templates-1xfr5l1bee3rm-us-east-1

Completed in 57.567s
Let me check the generated configuration file:
Reading file: /Users/user_name/Documents/2026/experiments/agent-skills/s3-buckets.tf, from line 1 to 50 (using tool: read)
✓ Successfully read 1221 bytes from /Users/user_name/Documents/2026/experiments/agent-skills/s3-buckets.tf

Completed in 0.0s
Perfect! I've successfully created a Terraform configuration to import all your S3 buckets. Here's what was generated:

Files Created:
terraform.tf - Provider configuration
discovery.tfquery.hcl - Query file to discover S3 buckets
s3-buckets.tf - Generated configuration with 36 S3 buckets and their import blocks
What's Included:
Resource blocks for all 36 S3 buckets in your account
Import blocks using identity-based import (Terraform 1.12+)
All current bucket configurations including tags, settings, and timeouts
Next Steps:
Review the generated configuration in s3-buckets.tf
Clean up the configuration by removing computed attributes and null values
Run the import:
bash
terraform plan
terraform apply
The configuration includes buckets for various services like:

Terraform state buckets
AWS service buckets (Glue, SageMaker, CodePipeline)
Application buckets
CDK and CloudFormation assets
All buckets are ready to be imported into Terraform state management using the modern identity-based import method.

The entire process took about 2 minutes and successfully discovered 36 S3 buckets across my account, generating complete Terraform configuration ready for import.

Test 2: TF version below 1.14.0

This time the request was:

Write the necessary Terraform configuration to import my existing S3 buckets from the AWS account into Terraform management. The S3 buckets to import have a prefix of awscc-tool.

> I'll help you import your existing S3 buckets with the "awscc-tool" prefix into Terraform management. Let me use the terraform-search-import skill to discover and
import these resources.
Reading file: /Users/user_name/.kiro/skills/terraform-search-import/SKILL.md, all lines (using tool: read)
 ✓ Successfully read 8591 bytes from /Users/user_name/.kiro/skills/terraform-search-import/SKILL.md
 - Completed in 0.0s

> Now I'll check what list resources are available for S3 buckets and create the necessary configuration to discover and import your S3 buckets with the "awscc-tool"
prefix.
I will run the following command: terraform version (using tool: shell)
Purpose: Check Terraform version to ensure it supports Terraform Search (requires >= 1.14)

Terraform v1.13.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v6.31.0

Your version of Terraform is out of date! The latest version
is 1.14.4. You can update by downloading from https://developer.hashicorp.com/terraform/install
 - Completed in 0.136s

> Since your Terraform version is 1.13.0 (below the required 1.14 for Terraform Search), I'll use the AWS CLI approach to discover your S3 buckets and generate the
import configuration manually.
Running aws cli command (using tool: aws):

Service name: s3api
Operation name: list-buckets
Parameters:
Region: us-east-1
Label: List all S3 buckets to find ones with awscc-tool prefix - Completed in 2.165s

> Perfect! I found 3 S3 buckets with the "awscc-tool" prefix:
1. awscc-tool-use-2fqr-assets
2. awscc-tool-use-ezct-artifacts
3. awscc-tool-use-ezct-assets

It correctly identified that the Terraform version was below 1.14.0 and that the query command wouldn’t be supported in that case. I skipped the rest of the steps in the snippet above, as Kiro went on to add the config-driven import blocks based on the names retrieved through the S3 API.
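
For reference, a config-driven import block for one of those buckets looks roughly like this; the resource label is my choice, and the bucket name comes from the log above:

    import {
      to = aws_s3_bucket.awscc_tool_use_2fqr_assets
      id = "awscc-tool-use-2fqr-assets"
    }

    resource "aws_s3_bucket" "awscc_tool_use_2fqr_assets" {
      bucket = "awscc-tool-use-2fqr-assets"
    }

A terraform plan after this shows the bucket being imported rather than created.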

Effectiveness of what you wrote

As a non-native English speaker, I do struggle with finding the right balance in my writing: either over-explaining or being too brief. So how do I make this skill effective? This is an area I didn’t have any experience or ideas about. The HashiCorp blog about the agent-skills repository mentioned tessl being used to evaluate and improve the skills they had written. So after the initial bit of testing with the skill copied into place, I started looking at tessl.

You have a few ways to do this evaluation. If you use the tessl npm package, you can publish the skill to the registry, where it is auto-evaluated. Or, as in my case, run the evaluation locally, since I am submitting this as an additional skill to the existing agent-skills bundle under the HashiCorp namespace.

As you would have figured, I went the latter route.

Installation

As with any npm library, it is as simple as:

npm install -g @tessl/cli

Review

Let’s review using the tessl skill review command.


> tessl skill review  ~/.kiro/skills/terraform-search-import

Validation Checks

  ✔ skill_md_line_count - SKILL.md line count is 374 (<= 500)
  ✔ frontmatter_valid - YAML frontmatter is valid
  ✔ name_field - 'name' field is valid: 'terraform-search-import'
  ✔ description_field - 'description' field is valid (227 chars)
  ✔ description_voice - 'description' uses third person voice
  ✔ description_trigger_hint - Description includes an explicit trigger hint
  ✔ compatibility_field - 'compatibility' field is valid (104 chars)
  ✔ allowed_tools_field - 'allowed-tools' field not present (optional)
  ✔ metadata_version - 'metadata.version' is present: 0.1.0
  ✔ metadata_field - 'metadata' contains 2 entries
  ⚠ license_field - 'license' field is missing
  ✔ frontmatter_unknown_keys - No unknown frontmatter keys found
  ✔ body_present - SKILL.md body is present
  ✔ body_examples - Examples detected (code fence or 'Example' wording)
  ✔ body_output_format - Output/return/format terms detected
  ✔ body_steps - Step-by-step structure detected (ordered list)

Overall: PASSED (0 errors, 1 warnings)

Judge Evaluation

  Description: 100%
    specificity: 3/3 - Lists multiple specific concrete actions: 'Discover existing cloud resources', 'Terraform Search queries', 'bulk import them into Terraform management'. These are clear, actionable capabilities.
    trigger_term_quality: 3/3 - Includes natural keywords users would say: 'cloud resources', 'Terraform', 'import', 'unmanaged infrastructure', 'IaC', 'auditing', 'migrating'. Good coverage of terms for infrastructure-as-code workflows.
    completeness: 3/3 - Clearly answers both what ('Discover existing cloud resources using Terraform Search queries and bulk import them') AND when ('Use when bringing unmanaged infrastructure under Terraform control, auditing cloud resources, or migrating to IaC').
    distinctiveness_conflict_risk: 3/3 - Clear niche focused specifically on Terraform import/discovery workflows. The combination of 'Terraform Search', 'bulk import', and 'unmanaged infrastructure' creates distinct triggers unlikely to conflict with general Terraform or cloud skills.

    Assessment: This is a well-crafted skill description that excels across all dimensions. It provides specific actions (discover, search, bulk import), includes natural trigger terms users would actually say, explicitly states both capabilities and usage scenarios, and carves out a distinct niche within the Terraform ecosystem focused on import workflows.

  Content: 77%
    conciseness: 2/3 - The content is mostly efficient but includes some redundancy - the workflow is explained multiple times (overview section, then detailed sections, then complete example). Some sections like 'When to Use' list items that are somewhat obvious given the skill description.
    actionability: 3/3 - Excellent executable examples throughout - complete HCL configurations, bash commands, and a full end-to-end workflow example. Code is copy-paste ready with realistic values and clear syntax.
    workflow_clarity: 3/3 - Clear 5-step workflow with explicit sequence. Includes validation checkpoint ('Review and refine generated resource and import blocks') and the 'Best Practices' section emphasizes running plan before apply and backing up state. The complete example at the end reinforces the workflow.
    progressive_disclosure: 2/3 - Good structure with clear sections and a reference table, but the content is quite long (~300 lines) and could benefit from splitting detailed examples or the troubleshooting table into separate reference files. The external references at the top are appropriate but internal organization could be improved.

    Assessment: This is a well-structured, highly actionable skill with excellent executable examples and clear workflow guidance. The main weakness is length - the content could be more concise by removing some redundancy (workflow explained multiple times) and potentially splitting detailed reference material into separate files. The safety considerations around state management are appropriately emphasized.

    Suggestions:
      - Consider moving the detailed 'Query Examples' section to a separate EXAMPLES.md file and keeping only 1-2 examples inline
      - Remove the 'When to Use' section as it duplicates the skill description and is self-evident from the content

Average Score: 89%

✔ Skill evaluation completed successfully!

Not bad for an initial skill reference. Re-running after making some of the changes it suggested:

❯ tessl skill review  ~/.kiro/skills/terraform-search-import

Validation Checks

  ✔ skill_md_line_count - SKILL.md line count is 369 (<= 500)
  ✔ frontmatter_valid - YAML frontmatter is valid
  ✔ name_field - 'name' field is valid: 'terraform-search-import'
  ✔ description_field - 'description' field is valid (227 chars)
  ✔ description_voice - 'description' uses third person voice
  ✔ description_trigger_hint - Description includes an explicit trigger hint
  ✔ compatibility_field - 'compatibility' field is valid (104 chars)
  ✔ allowed_tools_field - 'allowed-tools' field not present (optional)
  ✔ metadata_version - 'metadata.version' is present: 0.1.0
  ✔ metadata_field - 'metadata' contains 2 entries
  ⚠ license_field - 'license' field is missing
  ✔ frontmatter_unknown_keys - No unknown frontmatter keys found
  ✔ body_present - SKILL.md body is present
  ✔ body_examples - Examples detected (code fence or 'Example' wording)
  ✔ body_output_format - Output/return/format terms detected
  ✔ body_steps - Step-by-step structure detected (ordered list)

Overall: PASSED (0 errors, 1 warnings)

Judge Evaluation

  Description: 100%
    specificity: 3/3 - Lists multiple specific concrete actions: 'Discover existing cloud resources', 'Terraform Search queries', 'bulk import them into Terraform management'. These are clear, actionable capabilities.
    trigger_term_quality: 3/3 - Includes natural keywords users would say: 'cloud resources', 'Terraform', 'import', 'unmanaged infrastructure', 'IaC', 'auditing', 'migrating'. Good coverage of terms for infrastructure-as-code workflows.
    completeness: 3/3 - Clearly answers both what ('Discover existing cloud resources using Terraform Search queries and bulk import them') AND when ('Use when bringing unmanaged infrastructure under Terraform control, auditing cloud resources, or migrating to IaC').
    distinctiveness_conflict_risk: 3/3 - Clear niche focused specifically on Terraform import/discovery workflows. The combination of 'Terraform Search', 'bulk import', and 'unmanaged infrastructure' creates distinct triggers unlikely to conflict with general Terraform or cloud skills.

    Assessment: This is a well-crafted skill description that excels across all dimensions. It provides specific actions (discover, search, bulk import), includes natural trigger terms users would actually say, explicitly states both capabilities and usage scenarios, and carves out a distinct niche within the Terraform ecosystem focused on import workflows.
  Content: 85%
    conciseness: 2/3 - The content is mostly efficient but includes some redundancy - the prerequisites section repeats the discovery script information, and some explanations (like the query output format) could be more compact. The post-generation cleanup section with before/after examples is valuable but slightly verbose.
    actionability: 3/3 - Excellent executable guidance throughout with complete, copy-paste ready HCL configurations and bash commands. The complete example at the end provides a full workflow with real commands. All code blocks are executable, not pseudocode.
    workflow_clarity: 3/3 - Clear decision tree upfront, explicit 5-step workflow, and the complete example shows the exact sequence with validation steps (terraform plan before apply). The 'Check Provider Support First' section with IMPORTANT callout establishes a critical validation checkpoint before proceeding.
    progressive_disclosure: 3/3 - Well-structured with clear sections progressing from prerequisites to basic to advanced usage. References external documentation appropriately (HashiCorp docs, MANUAL-IMPORT.md for unsupported resources). Content is appropriately organized without being monolithic or requiring deep navigation.

    Assessment: This is a strong, well-structured skill that provides actionable guidance for Terraform resource discovery and bulk import. The decision tree and explicit workflow steps make it easy to follow, and the code examples are complete and executable. Minor verbosity in some sections (prerequisites, cleanup examples) prevents a perfect conciseness score, but overall the content earns its place.

Average Score: 93%

✔ Skill evaluation completed successfully!

I left it there without over-optimizing it based on some feedback notes I saw. Why do I feel this is important? Working with an open spec, a tool like this can guide you on how to make your skills effective. Keep your tests handy and re-run them after any changes you make. Sometimes it is just a change in ordering which makes a world of difference.

Learnings

  • If I had to start on a new agent skill tomorrow (without some baseline), I would probably take one of the approaches below:
    • tessl gives you a tessl skill new workflow with the npm library to get you started.
    • Anthropic gives you a skill creator skill which you could possibly use to start.
  • As with any of these instructions, you have to be clear about the versions your features are available in. Terraform Search requires both a Terraform version of 1.14.0 or above and a provider version with list resource support.
    • I had to explicitly instruct, in the skill, to remove version constraints on the provider block so the latest provider version is always used.
  • Compatibility requirements: I had mixed results with this parameter, which I am keeping for now and will revisit later to refine further if needed.

Conclusion

It was a fun experiment to see that some of the existing instructions in my documents could be used to create a skill for bringing existing resources under Terraform management. The effort required in this case was minimal, considering I already had the instructions in various markdown files on my machine. tessl review helps you see what else could be optimized to efficiently manage what the model sees when it is made aware of the available skills. What’s next? I think the ephemerality and action-based approaches could use some nudging among practitioners.

Note: As of today, my changes are on an active PR being reviewed.
