<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Cognizant Transmutation</title>
	<atom:link href="https://www.ibd.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ibd.com</link>
	<description>Internet Bandwidth Development: Composting the Internet for over Two Decades</description>
	<lastBuildDate>Thu, 26 Mar 2026 22:50:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.8</generator>

<image>
	<url>https://i0.wp.com/www.ibd.com/wp-content/uploads/2019/01/fullsizeoutput_7ae8.jpeg?fit=32%2C32&#038;ssl=1</url>
	<title>Cognizant Transmutation</title>
	<link>https://www.ibd.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<atom:link rel="hub" href="https://pubsubhubbub.appspot.com"/>
<atom:link rel="hub" href="https://pubsubhubbub.superfeedr.com"/>
<atom:link rel="hub" href="https://websubhub.com/hub"/>
<atom:link rel="self" href="https://www.ibd.com/feed/"/>
<site xmlns="com-wordpress:feed-additions:1">156814061</site>	<item>
		<title>Connect Quicksight to RDS in a private VPC</title>
		<link>https://www.ibd.com/howto/connect-quicksight-to-rds-in-a-vpc-2/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Fri, 28 Jan 2022 19:27:46 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<guid isPermaLink="false">https://www.ibd.com/howto/connect-quicksight-to-rds-in-a-vpc-2/</guid>

					<description><![CDATA[<p>How to setup a Quicksight VPC Connection to Aurora RDS in a private VPC</p>
<p>The post <a href="https://www.ibd.com/howto/connect-quicksight-to-rds-in-a-vpc-2/">Connect Quicksight to RDS in a private VPC</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>There are cases where you want to connect <a href="https://aws.amazon.com/quicksight/">AWS Quicksight</a> to pull data from an RDS database in one of your private VPCs. It’s one of those things that you don’t do often, and it’s just funky enough and different enough from most AWS services that I have had to relearn how to do it each time. So here’s what I’ve learned, for posterity.</p>
<p>Quicksight can automatically connect to databases that can be accessed via a public IP. If your DB is publicly accessible to the Internet (with Security Group filtering of course), then you can pretty much ignore this article.</p>
<p>If you happen to have the weird case where your DB does have a public IP address but it is not actually accessible to the public Internet for policy, technical or historical reasons, then read on.</p>
<h2>Quicksight <code>VPC Connection</code> Requirements</h2>
<p>Quicksight has the option of creating a connection between your instance of Quicksight and one of your VPCs. It does that by injecting a Network Interface into a subnet you specify in the target VPC.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-quicksight-vpc-network-interace.png?ssl=1" alt="Networking of Quicksight and VPC"/></p>
<p>You only have to supply the</p>
<ul>
<li><code>VPC ID</code> that your target DB is in.</li>
<li><code>Subnet</code> that is routable to the subnet your target DB is in</li>
<li><code>Security Group</code> dedicated to the Quicksight connection that will allow all TCP traffic to the target DB</li>
<li>Optionally a <code>DNS Inbound Endpoint</code></li>
</ul>
<p>We&#8217;re going to assume that the target DB and its VPC already exist.</p>
<p>You can use an existing <code>Subnet</code> as long as it&#8217;s in the same VPC and is routable to the subnets used by the target DB. By default, subnets in a VPC can route to any other subnet in the VPC, but you should double-check.</p>
<p>When you create the <code>VPC Connection</code> in the Quicksight management console, it will automatically create a <code>Network Interface</code> on the specified <code>Subnet</code> and will be associated with the <code>Security Group</code> specified.</p>
<p>Note that the <code>Security Group</code> associated with this new Quicksight <code>Network Interface</code> behaves as if it were stateless. Response packets coming back to a Quicksight request will arrive on randomly allocated port numbers. Normally Security Groups are stateful and handle this for you, but for the Quicksight Network Interface you must explicitly allow inbound traffic on all ports.</p>
<p>The optional <code>DNS Inbound Endpoint</code> allows you to tell Quicksight to use the private DNS resolver for your VPC instead of just querying the public DNS zones. This is what is needed if your target DB has a public IP address. Without this setting, Quicksight will get the public IP address when it queries the <code>Endpoint name</code> of your DB, and you will be scratching your head for days wondering why the connection is not working.</p>
<p>If you do use the <code>DNS Inbound Endpoint</code> option, you will have to set it up in <code>Route53</code>.</p>
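<p>You can see the symptom directly with <code>dig</code>. Against public DNS the DB <code>Endpoint</code> name (a placeholder below) resolves to its public IP; asked through the VPC&#8217;s own resolver from a host inside the VPC, it resolves to the private IP:</p>
<pre><code class="language-bash"># Public DNS view of the endpoint (placeholder name): returns the public IP
dig +short mydb.cluster-abc123.us-east-1.rds.amazonaws.com

# From an EC2 host in the VPC, the Amazon-provided resolver returns the private IP
dig +short mydb.cluster-abc123.us-east-1.rds.amazonaws.com @169.254.169.253</code></pre>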
<p>Detailed instructions for all of this are described below.</p>
<p>A <code>VPC Connection</code> will allow Quicksight to connect to any of the following in your VPC:</p>
<ul>
<li>Amazon OpenSearch Service</li>
<li>Amazon Redshift</li>
<li>Amazon Relational Database Service</li>
<li>Amazon Aurora</li>
<li>MariaDB</li>
<li>Microsoft SQL Server</li>
<li>MySQL</li>
<li>Oracle</li>
<li>PostgreSQL</li>
<li>Presto</li>
<li>Snowflake</li>
</ul>
<p>You can reuse a <code>VPC Connection</code> for any Datasource in your Quicksight account in a region.</p>
<h2>Subnet Info</h2>
<h3>Get VPC Info</h3>
<p>We&#8217;ll need the <code>VPC ID</code> and its associated <code>CIDR Block</code>.</p>
<p>You can look at your RDS configuration to see what VPC it is in.<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-rds-configuration-vpc-info.png?ssl=1" alt="RDS Configuration showing VPC"/></p>
<p>In this example:</p>
<ul>
<li><code>VPC ID</code> ends in <code>2aed</code></li>
<li><code>CIDR Block</code>: 10.0.0.0/16<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-vpc-configuration.png?ssl=1" alt="VPC Console view of the VPC of interest"/></li>
</ul>
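<p>If you prefer the CLI, the same information can be pulled with <code>aws rds</code> and <code>aws ec2</code>. A sketch, assuming a placeholder DB identifier and VPC ID:</p>
<pre><code class="language-bash"># VPC ID of the target DB (placeholder instance identifier)
aws rds describe-db-instances --db-instance-identifier my-aurora-instance \
  --query 'DBInstances[0].DBSubnetGroup.VpcId' --output text

# CIDR Block of that VPC (placeholder VPC ID)
aws ec2 describe-vpcs --vpc-ids vpc-00000000000002aed \
  --query 'Vpcs[0].CidrBlock' --output text</code></pre>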
<h3>Pick a Subnet for the Quicksight Network Interface to Use</h3>
<p>The criteria are:</p>
<ul>
<li>In the same VPC as the target DB</li>
<li>Is a private subnet</li>
<li>In the same Availability Zone as at least one of the subnets associated with the target DB</li>
<li>Routable to that subnet in the target DB
<ul>
<li>Has a route table that routes to the VPC CIDR Block</li>
<li>And the Target DB Subnets also can route to the VPC CIDR Block</li>
</ul>
</li>
<li>Doesn&#8217;t have an ACL that would block access to/from the target DB
<ul>
<li>This is the usual case</li>
</ul>
</li>
</ul>
<h4>Find the subnets used on the target DB</h4>
<p>In this example our target DB is an Aurora Postgres cluster. Looking at the RDS Console, we can find the subnets it&#8217;s using.</p>
<p><strong>Click on one of the subnets to view the subnet info</strong><br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-rds-configuration.png?ssl=1" alt="RDS Console with subnet info"/></p>
<p><strong>See what <code>Availability Zone</code> it&#8217;s in (us-east-1b in this example)</strong><br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-rds-subnet-details-availability-zone.png?ssl=1" alt="RDS Subnet info"/></p>
<p><strong>Confirm it routes to the VPC CIDR Block (10.0.0.0/16 in this example)</strong><br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-rds-subnet-details.png?ssl=1" alt="RDS Console showing Availability Zone"/></p>
<ul>
<li>Go to the subnet view in the VPC Console.</li>
<li>Find an existing subnet that also routes to the same VPC CIDR Block (or an overlapping range shared with the DB subnet) and is in the same <code>Availability Zone</code>
<ul>
<li>You could also create a new subnet for this as long as it meets the same criteria</li>
</ul>
</li>
<li>In our example it&#8217;s the subnet that ends with <code>90dc</code></li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-quicksight-subnet.png?ssl=1" alt="Existing Subnet suitable for Quicksight"/></p>
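<p>To double-check the routing criteria from the CLI, you can list a candidate subnet and its route table and confirm there is a route covering the VPC CIDR Block. A sketch with a placeholder subnet ID:</p>
<pre><code class="language-bash"># Availability Zone, VPC and CIDR of a candidate subnet (placeholder ID)
aws ec2 describe-subnets --subnet-ids subnet-000000000000090dc \
  --query 'Subnets[0].[AvailabilityZone,VpcId,CidrBlock]' --output text

# Routes of the route table explicitly associated with that subnet;
# look for the 10.0.0.0/16 "local" route (a subnet that only uses the
# main route table will not show up under this filter)
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-000000000000090dc \
  --query 'RouteTables[0].Routes[].[DestinationCidrBlock,GatewayId]' --output text</code></pre>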
<h2>Security Group</h2>
<p>Create a new <code>Security Group</code> dedicated to the Quicksight Network Interface. You don&#8217;t <em>have</em> to create one specific to this, but it makes management easier than mixing it in with your existing Security Groups.</p>
<p>We&#8217;ll call it <code>Amazon-QuickSight-access</code>. There&#8217;s nothing magic about the name; use whatever fits your naming scheme.</p>
<h3>Inbound Rules</h3>
<p>Set the <code>Inbound Rules</code> to allow traffic on all TCP ports. As mentioned earlier, this is because this will be a stateless security group and all response packets will arrive on random inbound ports.</p>
<h3>Outbound Rules</h3>
<p>The <code>Outbound Rules</code> should limit the destinations to just your target DB. The easiest way is to set the destination to be the Security Group set in your RDS Database.</p>
<p>You should also limit the outbound ports to be ones appropriate for your target DB, such as port 5432 for Postgres.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-security-group-outbound-rules.png?ssl=1" alt="Outbound Rules with security group destination"/></p>
<p>We ran into a problem where, for historical reasons, there were existing inbound rules on the Target DB that prevented us from using the Target DB&#8217;s Security Group as the destination, so we used a CIDR range that covered the Target DB&#8217;s range of addresses. This should be an unusual situation and you can probably ignore it.<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-security-group-outbound-rules-cidr.png?ssl=1" alt="Outbound Rules with security group destination"/></p>
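<p>The Security Group above can also be sketched with the CLI. The group and VPC IDs below are placeholders, and the egress rule references the Target DB&#8217;s Security Group:</p>
<pre><code class="language-bash"># Dedicated Security Group for the Quicksight Network Interface
aws ec2 create-security-group --group-name Amazon-QuickSight-access \
  --description "Quicksight VPC Connection" --vpc-id vpc-00000000000002aed

# Inbound: all TCP ports from the VPC CIDR (responses come back on random ports)
aws ec2 authorize-security-group-ingress --group-id sg-00000000000016e80 \
  --protocol tcp --port 0-65535 --cidr 10.0.0.0/16

# Outbound: drop the default allow-all rule, then allow Postgres only,
# with the destination restricted to the Target DB Security Group
aws ec2 revoke-security-group-egress --group-id sg-00000000000016e80 \
  --protocol all --cidr 0.0.0.0/0
aws ec2 authorize-security-group-egress --group-id sg-00000000000016e80 \
  --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=sg-00000000000000db0}]'</code></pre>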
<h2>DNS Resolver Endpoints (optional)</h2>
<p>You only need to fill this in for cases where the DNS lookup of your Target DB <code>Endpoint</code> against public DNS would be incorrect. The usual use case is a somewhat complicated setup where your Target DB is on the other side of a VPC peering connection; in that case, only the DNS resolver in your private VPC may know the proper resolution of the Target DB <code>Endpoint</code>.</p>
<p>In our case, we had the unusual situation that our Target DB had a public IP address, so when Quicksight did a DNS query on the Target DB <code>Endpoint</code> name, it got the public IP address, which was not valid for the <code>VPC Connection</code>. The workaround is for Quicksight to use the local VPC DNS resolver, and thus our need to set up the <code>DNS Resolver Endpoints</code>.</p>
<p>It took a while to figure out that this was why we could never get the <code>VPC Connection</code> to work until we set this up. The <code>VPC Connection</code> validation check does not differentiate between networking, DNS, or username/password problems; an issue in any one of those can make the validation fail.</p>
<h3>Route53 Resolver Inbound Endpoints</h3>
<h4>Create a Security Group for the Resolver</h4>
<p>The resolver needs to have a security group for itself to allow the DNS requests to get to it.</p>
<h5>Inbound Rules</h5>
<ul>
<li>Create a new Security Group and call it something like <code>quicksight-route53-resolver</code>, or whatever fits your naming scheme.</li>
<li>Set the <code>Inbound Rules</code> to allow for DNS UDP and DNS TCP from all sources on the VPC CIDR Block</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-security-group-dns-resolver-inbound.png?ssl=1" alt="Resolver Security Group Inbound Rules"/></p>
<h5>Outbound Rules</h5>
<p>You can leave the default allow-all outbound rule.</p>
<h4>Setup the Route53 Endpoint Resolver</h4>
<p>You will need to go to the Route53 Console and select <code>Resolver-&gt;Inbound endpoints</code> and click on the <code>Create Inbound Endpoint</code> button.</p>
<ul>
<li>Set the <code>Endpoint name</code>
<ul>
<li>Something that fits your naming scheme</li>
<li>Our example is <code>quicksight-prod</code></li>
</ul>
</li>
<li>Set the VPC ID to be the same as your <code>VPC-ID</code> used for your Target DB</li>
<li>Set the Security Group to the Security Group you created</li>
<li>Set the Availability Zone / Subnet for two IP Addresses for the resolver
<ul>
<li>Could be any that are routable to the Subnet that is assigned to the <code>VPC Connection</code>.</li>
<li>Should be a private subnet</li>
<li>Might as well make one of them the same Subnet used by the <code>VPC Connection</code></li>
<li>Check the option <code>Use an IP address that is selected automatically</code> for both</li>
<li>Click <code>Submit</code> when done</li>
</ul>
</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-route53-create-inbound-endpoint.png?ssl=1" alt="Route53 Create Resolver Inbound Endpoint"/></p>
<p>You will then end up with an <code>Inbound Endpoint</code> that will have been assigned two IP addresses. These addresses will be needed to supply to the <code>VPC Connection</code> and be used to update the Quicksight Security Group.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-route53-quicksight-inbound-resolver.png?ssl=1" alt="Route53 Quicksight Inbound Resolver"/></p>
<p>In this example the two IP Addresses are</p>
<ul>
<li>10.0.100.120</li>
<li>10.0.101.80</li>
</ul>
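<p>The same <code>Inbound Endpoint</code> can be created through the Route53 Resolver API. A sketch with placeholder subnet, Security Group, and endpoint IDs:</p>
<pre><code class="language-bash">aws route53resolver create-resolver-endpoint \
  --name quicksight-prod \
  --direction INBOUND \
  --creator-request-id quicksight-prod-request-1 \
  --security-group-ids sg-0000000000000d0d1 \
  --ip-addresses SubnetId=subnet-000000000000090dc SubnetId=subnet-0000000000000a111

# Once it is up, list the two IP addresses it was assigned
aws route53resolver list-resolver-endpoint-ip-addresses \
  --resolver-endpoint-id rslvr-in-0000000000000000</code></pre>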
<h4>Update the Quicksight Security Group for DNS</h4>
<p>If you are using the DNS Resolver Inbound Endpoints feature, you will also have to update the <code>Outbound Rules</code> of the Security Group we created earlier for the <code>Quicksight Network Interface</code>. This is to enable Quicksight to be able to access the DNS Resolver as well as the Target DB.</p>
<p>To do this we will add DNS UDP and DNS TCP to the <code>Outbound Rules</code> of the <code>Amazon-QuickSight-access</code> Security Group for each of the two IP addresses from the <code>Inbound Resolver</code> we just created. Note that you need to add the CIDR suffix <code>/32</code> when entering them into the Security Group editor.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-security-group-quicksight-with-dns-resolver-ips.png?ssl=1" alt="Quicksight Security Group with DNS Resolver IPs"/></p>
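<p>Before relying on these IPs, it&#8217;s worth confirming from a host inside the VPC that each resolver address returns the private IP of the DB <code>Endpoint</code> (the endpoint name is a placeholder; the IPs are our example values):</p>
<pre><code class="language-bash"># Each Inbound Endpoint IP should return the private IP of the DB endpoint
dig +short mydb.cluster-abc123.us-east-1.rds.amazonaws.com @10.0.100.120
dig +short mydb.cluster-abc123.us-east-1.rds.amazonaws.com @10.0.101.80</code></pre>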
<h2>Create the Actual VPC Connection</h2>
<p>Now we have everything we need to setup the actual <code>VPC Connection</code> in the Quicksight management console.</p>
<p>You will of course need to have proper permissions to access and manage Quicksight in your account. That is beyond the scope of this article. We&#8217;re going to assume you have all that already.</p>
<ul>
<li>Enter the Quicksight Console, click on your username on the top right of the page, and select <code>Manage Quicksight</code></li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-quicksight-select-manage.png?ssl=1" alt="Home page select Manage Quicksight"/></p>
<ul>
<li>Then select <code>Manage VPC Connections</code> in the left hand Navbar</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-qucicksight-select-manage-vpc-connections.png?ssl=1" alt="Select Manage VPC Connection from Navbar"/></p>
<ul>
<li>Click on <code>Add VPC Connection</code> to create the new connection</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-quicksight-add-vpc-connection.png?ssl=1" alt="Add VPC Connection"/></p>
<p>Fill in the form with the info we found or created earlier:</p>
<ul>
<li><code>VPC Connection Name</code>
<ul>
<li>Appropriate name of the connection based on your naming conventions</li>
<li>Our example: <code>my-aurora-db</code></li>
</ul>
</li>
<li><code>VPC ID</code>: The <code>VPC ID</code> we have been using earlier
<ul>
<li>Our example ends with <code>2aed</code></li>
</ul>
</li>
<li><code>Subnet ID</code>: The Subnet we chose for the Quicksight <code>Network Interface</code>
<ul>
<li>In our example it ends with <code>90dc</code></li>
</ul>
</li>
<li><code>Security Group ID</code>: The Security Group we created for Quicksight
<ul>
<li>Our example: <code>Amazon-QuickSight-access</code> (ended in <code>16e8</code>)</li>
</ul>
</li>
<li><code>DNS Resolver Endpoints</code>: The IP addresses from the DNS Resolver <code>Inbound Endpoints</code>
<ul>
<li>Our example: <code>10.0.100.120</code> and <code>10.0.101.80</code></li>
</ul>
</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-vpc-connection-form.png?ssl=1" alt="VPC Connection Form"/></p>
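<p>When this was written the <code>VPC Connection</code> could only be created in the console, but newer versions of the AWS CLI expose an equivalent <code>create-vpc-connection</code> call. A sketch with placeholder account, subnet, Security Group, and role values:</p>
<pre><code class="language-bash">aws quicksight create-vpc-connection \
  --aws-account-id 111111111111 \
  --vpc-connection-id my-aurora-db \
  --name my-aurora-db \
  --subnet-ids subnet-000000000000090dc subnet-0000000000000a111 \
  --security-group-ids sg-00000000000016e80 \
  --dns-resolvers 10.0.100.120 10.0.101.80 \
  --role-arn arn:aws:iam::111111111111:role/quicksight-vpc-connection</code></pre>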
<h2>Create a Dataset using the VPC Connection</h2>
<p>Now that the <code>VPC Connection</code> has been setup, we can use it to create a Dataset from the Target DB.</p>
<ul>
<li>Click on the <code>Quicksight</code> logo on the top left of the screen to get back to the Quicksight home page.</li>
<li>Click on <code>Datasets</code> at the bottom of the left Navbar</li>
<li>Click on <code>New dataset</code> on the top right of the page.</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-getting-to-new-dataset.png?ssl=1" alt="Getting to the Dataset create page"/></p>
<ul>
<li>Click on <code>Aurora</code> (or other source, but we&#8217;re not going to show other sources in this article)</li>
</ul>
<h3>Fill in the <code>New Aurora data source</code> form</h3>
<ul>
<li>
<code>Data source name</code>: Our example is <code>my-data-source</code></li>
<li>
<code>Connection type</code>: Select the VPC connection we created
<ul>
<li><code>my-aurora-db</code><br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-select-vpc-connection.png?ssl=1" alt="Select VPC Connection"/></li>
</ul>
</li>
<li>
<code>Database connector</code>: <code>PostgreSQL</code></li>
<li>
<code>Database server</code>: The <code>Endpoint</code> of your Aurora DB
<ul>
<li>This is the fully qualified DNS name of your DB endpoint</li>
<li>You can find it in the <code>Connectivity &amp; security</code> tab on the DB&#8217;s RDS Console page</li>
<li>You probably want to use a reader endpoint</li>
</ul>
</li>
<li>
<code>Port</code>: The proper port for your Target DB
<ul>
<li>Postgres default is <code>5432</code></li>
</ul>
</li>
<li>
<code>Database Name</code>: The name of the database within the RDS of interest
<ul>
<li>The same name you would use in a DB connection string or in psql to connect to your working DBs</li>
</ul>
</li>
<li>
<code>Username</code>: The db username needed to connect</li>
<li>
<code>Password</code>: The db password</li>
</ul>
<h3>Click on <code>Validate Connection</code></h3>
<p>It should turn to <code>Validated</code> with a checkmark if all went well. This should happen within a few seconds.</p>
<p>At this point you can now click on the <code>Create data source</code> button and do the normal Quicksight data source stuff. That is all independent of the VPC Connection and is not part of this article.</p>
<h2>If the VPC Connection fails to Validate</h2>
<p>If validation failed, you are going to have to check several things. There will be an error message, and you can click on the <code>details</code> link, but it&#8217;s probably not going to be helpful.</p>
<p>The error diagnostics for the VPC Connection rarely give you any more info than that it could not connect to the DB.</p>
<p>You need to determine if it&#8217;s because of:</p>
<ul>
<li>The routing from the Quicksight Network Interface to the Target DB</li>
<li>The Security Group settings</li>
<li>Basic error in the regular connection parameters (<code>Database name</code>, <code>Username</code>, <code>Password</code>)</li>
<li>Whether the <code>Database server</code> value is correct and the Quicksight DNS query is getting the right answer (a private IP, not a public one or nothing at all)</li>
</ul>
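<p>One quick way to rule out the basic connection parameters is to try the exact same values with <code>psql</code> from a host that already has network access to the Target DB. The values below are our example placeholders:</p>
<pre><code class="language-bash"># Same endpoint, port, database name and user as entered in the Quicksight form
psql "host=mydb.cluster-abc123.us-east-1.rds.amazonaws.com port=5432 dbname=mydb user=quicksight_user" -c 'select 1;'</code></pre>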
<h3>Check for basic connectivity</h3>
<p>You can check that basic connectivity (routing and security groups) is working using the <code>Reachability Analyzer</code> in the VPC Console. Unfortunately the analyzer has a limited set of elements that can be specified as a Source and Destination. The only one that applies to the Quicksight VPC Connection as a Source and Aurora RDS as a Destination is <code>Network Interfaces</code>. So we&#8217;re going to need to find the IDs of those two Network Interfaces.</p>
<h4>Find the ID of the VPC Connection Network Interface</h4>
<p>You will need to know the <code>Network Interface</code> ID of the interface created for the VPC Connection. To figure that out, go to the EC2 Console page and click on <code>Network Interfaces</code> under <code>Network &amp; Security</code> in the Navbar on the left.</p>
<p>Then search for the name you used for the Quicksight connection. Our example was <code>my-aurora-db</code>. It will be part of the description of the <code>Network Interface</code> associated with that connection. In our example it starts with <code>eni-0b9e</code>.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-search-for-network-interface.png?ssl=1" alt="Search for Network Interface"/></p>
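<p>The same lookup can be done with the CLI by filtering on the <code>Network Interface</code> description. The connection name is our example value:</p>
<pre><code class="language-bash"># ENI created for the Quicksight VPC Connection; its description contains the name
aws ec2 describe-network-interfaces \
  --filters 'Name=description,Values=*my-aurora-db*' \
  --query 'NetworkInterfaces[].[NetworkInterfaceId,PrivateIpAddress]' --output text</code></pre>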
<h4>Find the ID of the Target DB Network Interface</h4>
<p>You will need to know any one of the <code>Network Interface</code> IDs of the Target DB. There can be a few, as there may be one per Availability Zone. It doesn&#8217;t matter which one you choose.</p>
<ul>
<li>Still on the EC2 Console page <code>Network Interfaces</code> page, search for the Target DB&#8217;s Security Group name.
<ul>
<li>You can find the Target DB Security Group name on the RDS Console page for your Target DB under the <code>Connectivity &amp; security</code> tab, labeled <code>VPC security groups</code></li>
</ul>
</li>
<li>Select any one of them.
<ul>
<li>Our example starts with: <code>eni-050d</code></li>
</ul>
</li>
</ul>
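<p>Or, from the CLI, filter on the Target DB&#8217;s Security Group name (a placeholder below):</p>
<pre><code class="language-bash"># ENIs attached to the Target DB, found via its Security Group name
aws ec2 describe-network-interfaces \
  --filters 'Name=group-name,Values=my-target-db-sg' \
  --query 'NetworkInterfaces[].[NetworkInterfaceId,AvailabilityZone]' --output text</code></pre>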
<h3>Run the Reachability Analyzer</h3>
<p>Go to the VPC Console and click on the <code>Create and analyze path</code> button on the top right of the page.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-reachability-analyzer.png?ssl=1" alt="Reachability Analyzer"/></p>
<ul>
<li>Give it a name</li>
<li>Select <code>Network Interfaces</code> for the <code>Source type</code> and <code>Destination type</code></li>
<li>Specify the <code>Network Interface ID</code> of the Quicksight <code>Network Interface</code> we found
<ul>
<li>Starts with <code>eni-0b9e</code> in our example</li>
</ul>
</li>
<li>Specify the <code>Network Interface ID</code> of the Target DB <code>Network Interface</code> we found
<ul>
<li>Starts with <code>eni-050d</code> in our example</li>
</ul>
</li>
<li>Specify <code>5432</code> for the <code>Destination port</code>
<ul>
<li>Or whatever port you set for your Target DB if not Postgres</li>
</ul>
</li>
<li>Protocol is <code>TCP</code></li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-create-and-analyze-path.png?ssl=1" alt="Start Create and Analyze"/></p>
<ul>
<li>Click on <code>Create and analyze path</code></li>
</ul>
<p>If it all works you should see:</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/connect-quicksight-analyze-success.png?ssl=1" alt="Analyze Success"/></p>
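<p>The same test can be scripted with the CLI. The ENI and path IDs below are placeholders matching our example:</p>
<pre><code class="language-bash"># Define the path from the Quicksight ENI to the Target DB ENI on the Postgres port
aws ec2 create-network-insights-path \
  --source eni-0b9e0000000000000 --destination eni-050d0000000000000 \
  --protocol tcp --destination-port 5432

# Run the analysis using the path ID returned above (placeholder)
aws ec2 start-network-insights-analysis \
  --network-insights-path-id nip-00000000000000000

# Check the result; NetworkPathFound should come back as True
aws ec2 describe-network-insights-analyses \
  --network-insights-path-id nip-00000000000000000 \
  --query 'NetworkInsightsAnalyses[-1].[Status,NetworkPathFound]' --output text</code></pre>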
<p>If that works and you configured the DNS Endpoint Resolver but your VPC Connection / Dataset creation still doesn&#8217;t work, you may want to repeat the <code>Reachability Analyzer</code> test for the DNS TCP and UDP ports in addition to the Postgres port, to double-check that DNS is passing properly between the resolver and Quicksight.</p>
<h2>Conclusion</h2>
<p>If the <code>Reachability Analyzer</code> said connectivity is OK and it still doesn&#8217;t work, then it&#8217;s probable that one of the other basic connection parameters is wrong, or there is something wrong with the <code>Endpoint</code> name. If you haven&#8217;t tried setting up the DNS Endpoint Resolver option, you can try that to see if there was a problem with how Quicksight was resolving the DNS for your <code>Endpoint</code>. That was what started this whole journey for me.</p>
<p>Otherwise, hopefully this did work for you and you can now happily view your Target DB in Quicksight!</p><p>The post <a href="https://www.ibd.com/howto/connect-quicksight-to-rds-in-a-vpc-2/">Connect Quicksight to RDS in a private VPC</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2073</post-id>	</item>
		<item>
		<title>Use Amplify Studio Figma Connector with Clojurescript</title>
		<link>https://www.ibd.com/howto/amplify-studio-cljs-tutorial/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 10 Jan 2022 05:39:56 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<guid isPermaLink="false">https://www.ibd.com/howto/amplify-studio-cljs-tutorial/</guid>

					<description><![CDATA[<p>How to use Clojurescript with Amplify React UI Components generated from Figma</p>
<p>The post <a href="https://www.ibd.com/howto/amplify-studio-cljs-tutorial/">Use Amplify Studio Figma Connector with Clojurescript</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1>Amplify Studio / Figma / Clojurescript / Reagent Tutorial</h1>
<p>Implements the AWS Tutorial <a class="keychainify-checked" href="https://welearncode.com/studio-vacation-site/">Build a Vacation Rental Site with Amplify Studio</a>, but instead of being Javascript-based, uses Clojurescript for the project implementation. It does incorporate the Javascript output of Amplify Studio, but all code that uses it is in Clojurescript.</p>
<h3>Tooling Used in the project:</h3>
<ul>
<li><a class="keychainify-checked" href="https://github.com/AutoScreencast/create-reagent-app">Create Reagent App</a> to create the project scaffold</li>
<li><a class="keychainify-checked" href="https://clojurescript.org">Clojurescript</a> (The whole point of this article!)</li>
<li><a class="keychainify-checked" href="http://shadow-cljs.org/">Shadow-CLJS</a> as the build tool / compiler</li>
<li><a class="keychainify-checked" href="https://webpack.js.org">Webpack</a> Key to prepping JSX and JS files from Amplify Studio and UI Components to be used with shadow-cljs transpiled Clojurescript.</li>
<li><a class="keychainify-checked" href="https://babeljs.io">Babel</a> Used by webpack to convert JSX to JS</li>
<li><a class="keychainify-checked" href="https://github.com/reagent-project/reagent">Reagent</a> (CLJS wrapper around <a class="keychainify-checked" href="https://reactjs.org/">React</a>) for building your user interface</li>
<li><a class="keychainify-checked" href="https://aws.amazon.com/amplify/studio/">Amplify Studio</a> and all the related <a class="keychainify-checked" href="https://aws.amazon.com/amplify/">AWS Amplify tooling</a></li>
<li><a class="keychainify-checked" href="https://www.figma.com/community/file/1047600760128127424">Figma AWS Amplify UI Kit</a></li>
</ul>
<h2>Prerequisites</h2>
<h3>Setup an Amplify Studio Project</h3>
<p>All the initial setup of the Amplify Studio Project on AWS and the associated Figma project is already described in the first part of the excellent <a class="keychainify-checked" href="https://welearncode.com/studio-vacation-site/">Build a Vacation Rental Site with Amplify Studio</a> so will not repeat it here.</p>
<p>That first part of the article will have you do all the following in the appropriate web consoles and services (AWS and Figma). You won&#8217;t be doing any CLI commands on your local dev computer:</p>
<ul>
<li>Setup an Amplify Studio project via the Amplify Sandbox</li>
<li>Create a basic data model in Amplify Studio</li>
<li>Deploy the start of the Amplify project to AWS</li>
<li>Create some sample data</li>
<li>Set up a Figma project using the Amplify UI Components and modify it</li>
<li>Import the modified Figma project into the Amplify project</li>
<li>Link the data model and the UI Component in Amplify Studio</li>
<li>Create a collection view using Amplify Studio</li>
</ul>
<p><strong>Start by following the instructions from the original article, <a class="keychainify-checked" href="https://welearncode.com/studio-vacation-site/">Build a Vacation Rental Site with Amplify Studio</a>, up through the section <code>Pull to Studio</code>.</strong></p>
<p>Once you have completed that, come back here and follow the rest of this post.</p>
<h2>Creating an Amplify Studio App with Clojurescript</h2>
<p>These are the actual instructions for creating your Amplify Studio app in Clojurescript instead of Javascript. They replace all the content after <code>Pull to Studio</code> in the original article <a class="keychainify-checked" href="https://welearncode.com/studio-vacation-site/">Build a Vacation Rental Site with Amplify Studio</a>.</p>
<h3>Create a git repo with shadow-cljs / reagent scaffolding</h3>
<p>Instead of using <code>create-react-app</code> that would have created a Javascript/React app, we’re going to use <a class="keychainify-checked" href="https://www.npmjs.com/package/create-reagent-app">create-reagent-app</a> to create the scaffolding of a shadow-cljs / reagent / react app repo.</p>
<p>In this tutorial, we will make this a git repo and snapshot the state at every stage so that if you make a mistake you can go back to an earlier step.</p>
<pre><code class="language-bash">npx create-reagent-app amplifystudio-cljs-tutorial
cd amplifystudio-cljs-tutorial
git init
git add -A
git commit -m "Initial Commit after create-reagent-app"
npm install</code></pre>
<h3>Add webpack and related dependencies</h3>
<p>Shadow-cljs cannot directly consume the JSX files that are the output of the Figma plugin, and it needs some help to incorporate the AWS UI Components files that Amplify Studio injects into the project.</p>
<p>The use of Babel to prepare JSX files for Shadow-cljs is based on info from <a class="keychainify-checked" href="https://shadow-cljs.github.io/docs/UsersGuide.html#_javascript_dialects">Shadow CLJS User’s Guide &#8211; JavaScript Dialects</a>. This tutorial moves the babel management into webpack configuration as described later on.</p>
<p>The following dependencies are needed primarily to install webpack and its dependencies.</p>
<p><code>html-webpack-plugin</code> and <code>html-beautifier-webpack-plugin</code> are used to inject the proper JS include for the output of webpack into the index.html.</p>
<pre><code>npm i -D @babel/cli @babel/core @babel/preset-react @babel/preset-env babel-loader html-webpack-plugin html-beautifier-webpack-plugin process webpack webpack-cli</code></pre>
<p>Then update any dependencies to the latest versions<br />
If you don’t already have it, install <a class="keychainify-checked" href="https://www.npmjs.com/package/npm-check-updates">npm-check-updates</a></p>
<pre><code>npm install -g npm-check-updates</code></pre>
<p>And then run it to update any dependencies to the latest versions, ignoring the versions pinned in the package.json.</p>
<p>I like to start projects with the latest versions of everything, but at a minimum make sure <code>shadow-cljs</code> is the latest version; it is best to stay current with that one.</p>
<p>If you run it without <code>-u</code>, it will just show you what it would update, and you can manually update the ones you care about.</p>
<pre><code class="language-bash">ncu -u
npm install</code></pre>
<h3>Update your local git repo</h3>
<pre><code class="language-bash">git add -A
git commit -m "Snapshot after adding webpack dependencies"</code></pre>
<p>If you want, you can push it to your own remote on GitHub or another repository host.</p>
<h3>Add AWS Dependencies</h3>
<ul>
<li>AWS Account
<ul>
<li>If you don’t already have an AWS account, you’ll need to create one in order to follow the steps outlined in this tutorial. <a class="keychainify-checked" href="https://portal.aws.amazon.com/billing/signup?redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation#/start">Create an AWS Account</a></li>
</ul>
</li>
<li>Amplify CLI<br />
If you don’t already have the Amplify CLI installed you can install it with</li>
</ul>
<pre><code>npm install -g @aws-amplify/cli</code></pre>
<ul>
<li>Configure Account / IAM / CLI to work with Amplify<br />
If you already have an AWS account you want to use, and your workstation / terminal is set up to use the AWS CLI via profiles in <code>~/.aws/credentials</code>, you can just set the profile to use in your terminal:</li>
</ul>
<pre><code>export AWS_PROFILE=&lt;your profile&gt;</code></pre>
<p>and you don’t need to do <code>amplify configure</code>.</p>
<p>If you haven’t set up aws amplify on your local dev machine before, follow the instructions at <a class="keychainify-checked" href="https://docs.amplify.aws/cli/start/install/#configure-the-amplify-cli">Configure the Amplify CLI</a></p>
<h3>Install the aws-amplify libraries in your project</h3>
<p>Still at the top level of the <code>amplifystudio-cljs-tutorial</code> repo, install the libraries:</p>
<pre><code>npm i aws-amplify @aws-amplify/ui-react</code></pre>
<p>You might want to commit the changes to git just as a snapshot in case the next step messes anything up.</p>
<pre><code>git commit -a -m "After adding amplify deps"</code></pre>
<h3>Sync repo with Amplify project</h3>
<p>Using the amplify CLI, pull the project info and ui-components into your repo.</p>
<p>You’ll get the command to do this from your Amplify Apps page that was created earlier.<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/aws-amplify-console2.png?ssl=1" alt="" /></p>
<p>If you are accessing your AWS account via IAM, log in to the AWS Console in your default browser first. The following command will open your default browser to authenticate to AWS.</p>
<p>If you are not using AWS IAM for auth, but are using the Amplify Console that has its own username/password style login, you don’t need to do anything in advance.</p>
<p><strong>DON’T TYPE THIS EXACT LINE</strong><br />
Use the line from your environment as it has the appID for your application<br />
The following line is just an example</p>
<pre><code>amplify pull --appId dgt42342sdv765la --envName staging</code></pre>
<p>This will eventually open a browser page to authenticate the process. As mentioned earlier, if you are using IAM for access, it&#8217;s easiest if you have already logged into the AWS Console in your browser. If you forget to do this, you can still log in now, then copy and paste the link shown in the output of the CLI command and it will retry authenticating.</p>
<p>If you are using the Amplify Studio username/password, you will get that dialog in the browser; fill it in and click Yes.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/amplify-studio5.png?ssl=1" alt="" /></p>
<p>It will then prompt you for a number of settings to set up your amplify project in this repo.</p>
<pre><code>Opening link: https://us-west-2.admin.amplifyapp.com/admin/dgt42342sdv765la/staging/verify/
&#x2714; Successfully received Amplify Studio tokens.
Amplify AppID found: dgtkqevv765la. Amplify App name is: rental-cljs
Backend environment staging found in Amplify Console app: rental-cljs
? Choose your default editor:
  Android Studio
  Xcode (Mac OS only)
  Atom Editor
  Sublime Text
  IntelliJ IDEA
  Vim (via Terminal, Mac OS only)
❯ Emacs (via Terminal, Mac OS only)
(Move up and down to reveal more choices)</code></pre>
<p>Of course the only choice that makes sense is Emacs <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f913.png" alt="🤓" class="wp-smiley" style="height: 1em; max-height: 1em;" /><br />
(Note: even though it says via Terminal, it works fine with GUI Emacs.)</p>
<pre><code>? Choose the type of app that you're building (Use arrow keys)
  android
  flutter
  ios
❯ javascript</code></pre>
<p>Keep javascript</p>
<pre><code>? What javascript framework are you using (Use arrow keys)
  angular
  ember
  ionic
❯ react
  react-native
  vue
  none</code></pre>
<p>Keep react</p>
<pre><code>? Source Directory Path:  src/amplify
? Distribution Directory Path: public
? Build Command:  npm run-script build
? Start Command: npm run-script start</code></pre>
<p>Enter <code>src/amplify</code> for <code>Source Directory Path</code><br />
Enter <code>public</code> for <code>Distribution Directory Path</code><br />
This build puts everything in <code>public</code>, but other scaffolds or cljs projects may use some other path. It should match the directory above <code>js</code> in the <code>output-dir</code> parameter in <code>shadow-cljs.edn</code>.</p>
<p>You can keep the defaults for <code>Build Command</code> and <code>Start Command</code></p>
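<p>For reference, here is roughly what the <code>:app</code> build in the <code>shadow-cljs.edn</code> generated by create-reagent-app looks like (a sketch; your scaffold version may differ slightly). Note that <code>:output-dir</code> is <code>public/js</code>, which is why <code>public</code> is the right <code>Distribution Directory Path</code>:</p>
<pre><code class="language-clojure">;; Sketch of the relevant part of shadow-cljs.edn
{:builds
 {:app
  {:target     :browser
   :output-dir "public/js"   ;; "public" is the directory above "js"
   :asset-path "/js"
   :modules    {:main
                {:init-fn amplifystudio-cljs-tutorial.app.core/main}}}}}</code></pre>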
<p>The rest of the config inputs and outputs:</p>
<pre><code>&#x2714; Synced UI components.
GraphQL schema compiled successfully.

Edit your schema at /Users/rberger/work/aws/amplifystudio-cljs-tutorial/amplify/backend/api/rentalcljs/schema.graphql or place .graphql files in a directory at /Users/rberger/work/aws/amplifystudio-cljs-tutorial/amplify/backend/api/rentalcljs/schema
Successfully generated models. Generated models can be found in /Users/rberger/work/aws/amplifystudio-cljs-tutorial/src/main
? Do you plan on modifying this backend? (Y/n) Y</code></pre>
<p>Say <code>Y</code> for <code>Do you plan on modifying this backend?</code></p>
<p>You might want to checkpoint your git repo again after this.</p>
<pre><code class="language-bash">git add -A
git commit -m "After pulling Amplify Studio project"</code></pre>
<p>You can make sure the basic reagent setup is still working by doing:</p>
<pre><code>npm start</code></pre>
<p>The first time you run this, it will take a while to download all the Clojurescript / Clojure dependencies.</p>
<p>And see that the app is running at <code>http://localhost:3000</code><br />
You will just see <code>Create Reagent App</code> on the page as a header.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/initial-reagent-app-page.png?ssl=1" alt="Initial Create Reagent App success page" /></p>
<h2>Update to support mixing webpack with shadow-cljs</h2>
<p>Based on David Vujic’s work <a class="keychainify-checked" href="https://davidvujic.blogspot.com/2021/08/hey-webpack-hey-clojurescript.html">Agile &amp; Coding: Hey Webpack, Hey ClojureScript</a> we’re going to add mechanisms to build the javascript code using webpack and the clojurescript code with shadow-cljs. This is necessary when using more recent versions of the AWS Amplify libraries.</p>
<h3>Make sure Shadow-cljs dependencies are up to date</h3>
<p>In <code>shadow-cljs.edn</code> make sure that the dependencies are up to date (you can check for the latest versions at <a class="keychainify-checked" href="https://clojars.org/">Clojars</a>)</p>
<pre><code class="language-clojure"> :dependencies
 [[reagent            "1.1.0"]
  [binaryage/devtools "1.0.4"]]</code></pre>
<h3>Shadow-cljs js-options</h3>
<p>Add the following lines to shadow-cljs.edn between the <code>:asset-path</code> and <code>:modules</code> stanzas in the <code>:app</code> section as per <a class="keychainify-checked" href="https://github.com/thheller">Thomas Heller</a>&#8216;s article<br />
<a class="keychainify-checked" href="https://code.thheller.com/blog/shadow-cljs/2020/05/08/how-about-webpack-now.html">How about webpack now?</a></p>
<pre><code class="language-clojure">   :js-options {:js-provider    :external
                :external-index "target/index.js"}</code></pre>
<h3>Make a template from index.html</h3>
<p>Webpack will be used to update index.html with the proper script include that points to the webpack bundle.</p>
<h4>Move <code>public/index.html</code> to <code>public/index.html.tmpl</code></h4>
<pre><code>mv public/index.html public/index.html.tmpl</code></pre>
<h4>Edit <code>public/index.html.tmpl</code></h4>
<ul>
<li>Add <code>defer</code> to the main script tag</li>
</ul>
<p>Change:</p>
<pre><code>&lt;script src="/js/main.js"&gt;&lt;/script&gt;</code></pre>
<p>To:</p>
<pre><code>&lt;script defer src="/js/main.js"&gt;&lt;/script&gt;</code></pre>
<h4>Add in a sytlesheet for the fonts</h4>
<ul>
<li>Add the following line after the other <code>link</code> tags in <code>&lt;head&gt;</code></li>
</ul>
<pre><code class="language-html">&lt;link
  rel="stylesheet"
  href="https://fonts.googleapis.com/css?family=Inter:slnt,wght@-10..0,100..900&amp;display=swap"
/&gt;</code></pre>
<h4>Copy the Amplify CSS to public</h4>
<p>Note that the source is <code>styles.css</code> (plural) and the destination is <code>style.css</code> (singular)</p>
<pre><code>cp node_modules/@aws-amplify/ui/dist/styles.css public/css/style.css</code></pre>
<h2>Update the scaffold Clojurescript code to support Amplify</h2>
<p>Here&#8217;s where we actually get to writing some code that uses the Amplify UI Components in an app.</p>
<p>Edit <code>src/main/amplifystudio_cljs_tutorial/app/core.cljs</code> with the following changes</p>
<h3>Add the dependencies for the <code>require</code></h3>
<p>Add the aws amplify and ui imports to the require so it looks like:</p>
<p>Note that <code>amplify pull</code> will populate <code>src/amplify/ui-components</code>, and the <code>webpack</code> setup described further on will arrange things so that the <code>"ui-components/RentalCollection"</code> require can be fulfilled.</p>
<pre><code class="language-clojure">(ns amplifystudio-cljs-tutorial.app.core
  (:require [reagent.dom :as rdom]
            ["/aws-exports" :default ^js aws-exports]
            ["aws-amplify" :refer [Amplify] :as amplify]
            ["@aws-amplify/ui-react" :refer [AmplifyProvider]]
            ["ui-components/RentalCollection" :default RentalCollection]))</code></pre>
<h3>Update the <code>app</code> function</h3>
<p>This is the actual initial page code that is run by the render function. It is primarily <a class="keychainify-checked" href="https://github.com/reagent-project/reagent/blob/master/doc/UsingHiccupToDescribeHTML.md">hiccup</a> syntax.</p>
<blockquote><p>Hiccup describes HTML elements and user-defined components as a nested ClojureScript vector.</p>
<ul>
<li>The first element is either a keyword or a symbol
<ul>
<li>If it is a keyword, the element is an HTML element where (name keyword) is the tag of the HTML element.</li>
<li>If it is a symbol, reagent will treat the vector as a component, as described in the next section.</li>
</ul>
</li>
<li>If the second element is a map, it represents the attributes to the element. The attribute map may be omitted.</li>
<li>Any additional elements must either be Hiccup vectors representing child nodes or string literals representing child text nodes.</li>
</ul>
</blockquote>
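<p>As a concrete illustration of those rules (a made-up snippet, not part of the tutorial app), the following hiccup vector uses a keyword tag, an attribute map, and both vector and string children:</p>
<pre><code class="language-clojure">[:div {:class "rental-card"}   ;; keyword tag + attribute map
 [:h2 "Beach House"]           ;; child vector becomes a nested element
 "Sleeps 6"]                   ;; string literal becomes a text node</code></pre>
<p>Reagent renders this as <code>&lt;div class="rental-card"&gt;&lt;h2&gt;Beach House&lt;/h2&gt;Sleeps 6&lt;/div&gt;</code>.</p>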
<p>This code:</p>
<ul>
<li>Displays an <code>h1</code> header</li>
<li>Wraps the <code>RentalCollection</code> we created in Figma / ui-components with the <code>AmplifyProvider</code></li>
</ul>
<p>The <code>:&gt;</code> is a function, <a class="keychainify-checked" href="http://reagent-project.github.io/docs/master/reagent.core.html#var-adapt-react-class">adapt-react-class</a>, that tells hiccup/reagent to interpret the next symbol as a React Component.<br />
More info at: <a class="keychainify-checked" href="https://cljdoc.org/d/reagent/reagent/1.1.0/doc/tutorials/react-features">React Features in Reagent</a></p>
<p>The <code>app</code> function:</p>
<pre><code class="language-clojure">(defn app []
  [:&gt; AmplifyProvider
  [:h1 "Amplify Studio Tutorial"]
   [:&gt; RentalCollection]])</code></pre>
<p>For comparison here is the equivalent Javascript:</p>
<pre><code class="language-jsx">function App() {
  return (
    &lt;AmplifyProvider&gt;
      &lt;RentalCollection /&gt;
    &lt;/AmplifyProvider&gt;
  );
}</code></pre>
<h3>Update the <code>main</code> function</h3>
<p>This function is the first code called in the program. It is where you would put any initialization code; it then calls the render function that kicks off the reagent/react event loop.</p>
<ul>
<li>Add a bit of logging so we can see that we&#8217;re hitting the code at runtime</li>
<li>Add the Amplify initialization code.</li>
</ul>
<pre><code class="language-clojure">(defn ^:export main []
  (js/console.log "main top")
  (-&gt; Amplify (.configure aws-exports))
  (render))</code></pre>
<p>In the Clojurescript statement:</p>
<pre><code>(-&gt; Amplify (.configure aws-exports))</code></pre>
<p><code>-&gt;</code> is the <a class="keychainify-checked" href="https://clojuredocs.org/clojure.core/-%3E">thread-first macro</a>. In this case it means that <code>Amplify</code> will be threaded in as the first argument of the following form, i.e. it&#8217;s the equivalent of this Clojurescript statement:</p>
<pre><code>(.configure Amplify aws-exports)</code></pre>
<p>In either case, it is the Javascript interop equivalent of:</p>
<pre><code>Amplify.configure(awsExports);</code></pre>
<h2>Setup Webpack / Babel</h2>
<h3>Babel config file</h3>
<p>Babel does the work of converting JSX files to Javascript files suitable for consumption by webpack and shadow-cljs. It is called by webpack.</p>
<p>Create the file <code>.babelrc</code> in the top level of the repo with the content:</p>
<pre><code class="language-json">{
  "presets": ["@babel/preset-env", "@babel/preset-react"]
}</code></pre>
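<p>To make the JSX transform concrete: with the classic runtime, <code>@babel/preset-react</code> rewrites JSX like <code>&lt;h1 className="title"&gt;Hi&lt;/h1&gt;</code> into a plain <code>React.createElement</code> call. A minimal sketch of that output, with a stubbed <code>createElement</code> so it runs standalone (React&#8217;s real element objects are richer):</p>
<pre><code class="language-javascript">// Stub of React.createElement (an assumption, for illustration only)
const React = {
  createElement: function (type, props) {
    var children = Array.prototype.slice.call(arguments, 2);
    return { type: type, props: props, children: children };
  },
};

// Roughly what Babel emits for: &lt;h1 className="title"&gt;Hi&lt;/h1&gt;
const element = React.createElement("h1", { className: "title" }, "Hi");
console.log(element.type); // "h1"</code></pre>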
<p>This tells babel to run the presets:</p>
<blockquote><p><a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-preset-env">@babel/preset-env</a> is a smart preset that allows you to use the latest JavaScript without needing to micromanage which syntax transforms (and optionally, browser polyfills) are needed by your target environment(s). This both makes your life easier and JavaScript bundles smaller!</p></blockquote>
<p>and</p>
<blockquote><p><a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-preset-react#docsNav">@babel/preset-react</a> loads the following plugins:</p>
<p><a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-plugin-syntax-jsx">@babel/plugin-syntax-jsx</a> &#8211; enables parsing of JSX<br />
<a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-plugin-transform-react-jsx">@babel/plugin-transform-react-jsx</a> &#8211; transform JSX to Javascript<br />
<a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-plugin-transform-react-display-name">@babel/plugin-transform-react-display-name</a> &#8211; Set displayName in the Javascript</p>
<p>And with the development option Classic runtime adds:</p>
<p><a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-plugin-transform-react-jsx-self">@babel/plugin-transform-react-jsx-self</a> &#8211; sets <code>self</code> in the transformed code<br />
<a class="keychainify-checked" href="https://babeljs.io/docs/en/babel-plugin-transform-react-jsx-source">@babel/plugin-transform-react-jsx-source</a> &#8211; injects the source information (file, lineno) into the the Javascript</p></blockquote>
<h3>Webpack configuration file</h3>
<p><a class="keychainify-checked" href="https://webpack.js.org/concepts/">Webpack</a>:</p>
<blockquote><p>At its core, webpack is a static module bundler for modern JavaScript applications. When webpack processes your application, it internally builds a dependency graph from one or more entry points and then combines every module your project needs into one or more bundles, which are static assets to serve your content from.</p></blockquote>
<p>We are using it to convert the JSX <code>ui-component</code> files from Figma/Amplify Studio into vanilla Javascript via babel.</p>
<p>Webpack is also being used to bundle the <code>src/amplify/models</code> and <code>src/amplify/ui-components</code> directories/files that are pulled from amplify into the repo as modules so that their objects can be <code>imported</code> into the app. This is configured in the <code>resolve</code> block below.</p>
<p>This will all go in a webpack configuration file, <code>webpack.config.js</code>, in the top level of the repo. The following sections describe the elements we&#8217;re going to use in that file.</p>
<h4>Requires</h4>
<p>The following requires the webpack modules and plugins used</p>
<pre><code class="language-javascript">const path = require("path");
const webpack = require("webpack");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const HtmlBeautifierPlugin = require("html-beautifier-webpack-plugin");</code></pre>
<h4>Basic Webpack config</h4>
<ul>
<li>Set mode to development</li>
<li><code>entry</code> &#8211; The file generated by shadow-cljs describing all the require/imports seen in the code</li>
<li><code>output</code> &#8211; Where webpack should put its final bundle of javascript that will be included by a <code>&lt;script&gt;</code> tag in the index.html</li>
<li><code>devtool</code> &#8211; Tells webpack to generate source maps to be consumed by the browser devtools</li>
</ul>
<pre><code class="language-javascript">module.exports = {
  mode: "development",
  entry: "./target/index.js",
  output: {
    path: path.resolve(__dirname, "public"),
    filename: "js/libs/bundle.js",
    clean: false,
  },
  devtool: "source-map",</code></pre>
<h4>Rules</h4>
<p>These are the main directives that tell webpack what to do.</p>
<ul>
<li><code>test: /.m?js/,</code> &#8211; Regex that specifies which file types the first rule applies to (ones that end with <code>.mjs</code> or <code>.js</code>)
<ul>
<li><code>fullySpecified: false</code> &#8211; allow import / require statements that do not end with a file extension</li>
<li><code>alias</code> &#8211; Maps the path to the javascript files to a module name. This allows the code to require the Amplify Studio <code>models</code> and <code>ui-components</code> as importable modules.</li>
</ul>
</li>
<li><code>test: /.jsx$/</code> &#8211; Regex that specifies which file types the second rule applies to (JSX files)
<ul>
<li><code>exclude</code> &#8211; Don&#8217;t apply it to files installed by npm in <code>/node_modules/</code></li>
<li><code>use</code> &#8211; Apply babel to the JSX files. The <code>.babelrc</code> file specified earlier tells babel to transform the JSX files to vanilla javascript</li>
</ul>
</li>
</ul>
<pre><code class="language-javascript">    rules: [
      {
        // docs: https://webpack.js.org/configuration/module/#resolvefullyspecified
        test: /.m?js/,
        resolve: {
          fullySpecified: false,
          alias: {
            models: "../src/amplify/models/index.js",
            "ui-components": "../src/amplify/ui-components",
          },
        },
      },
      {
        test: /.jsx$/,
        exclude: /node_modules/,
        use: ["babel-loader"],
      },
    ],</code></pre>
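<p>To illustrate what the <code>alias</code> block accomplishes, here is a hypothetical sketch of the rewrite webpack performs (the function name and logic are invented for illustration; webpack&#8217;s real resolver does much more):</p>
<pre><code class="language-javascript">// Hypothetical sketch: rewrite an import request whose first path
// segment matches an alias key.
const aliases = {
  "ui-components": "../src/amplify/ui-components",
};

function resolveAlias(request) {
  const parts = request.split("/");
  const mapped = aliases[parts[0]];
  if (!mapped) return request; // no alias: leave the request alone
  return [mapped].concat(parts.slice(1)).join("/");
}

console.log(resolveAlias("ui-components/RentalCollection"));
// "../src/amplify/ui-components/RentalCollection"</code></pre>
<p>This is why the Clojurescript code can simply require <code>"ui-components/RentalCollection"</code> without knowing where Amplify Studio put the files.</p>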
<h4>Plugins</h4>
<p>This is where plugins are loaded.</p>
<ul>
<li><code>process</code> &#8211; This was needed because webpack 5 no longer includes a polyfill for the <code>process</code> Node.js variable, and some dependencies required <code>process.env</code></li>
<li><a class="keychainify-checked" href="https://webpack.js.org/plugins/html-webpack-plugin/">HtmlWebpackPlugin</a> &#8211; Enables creating <code>public/index.html</code> from a template so that webpack can inject the path to its bundle into the index.html. Also useful if you want to automate updates of the index.html for other things.</li>
<li><a class="keychainify-checked" href="https://github.com/zamanruhy/html-beautifier-webpack-plugin#readme">HtmlBeautifierPlugin</a> &#8211; Cleans up the index.html, mainly by adding proper newlines</li>
</ul>
<pre><code class="language-javascript">  plugins: [
    new webpack.ProvidePlugin({
      process: "process/browser",
    }),
    new HtmlWebpackPlugin({
      template: "./public/index.html.tmpl",
      filename: "index.html",
    }),
    new HtmlBeautifierPlugin(),
  ],</code></pre>
<p>The Html plugins / index.html templating are not strictly necessary. You could instead add your own script tag to index.html, such as:</p>
<pre><code>&lt;script defer src="js/libs/bundle.js"&gt;&lt;/script&gt;</code></pre>
<h4>The full <code>webpack.config.js</code></h4>
<p>Create a file <code>webpack.config.js</code> also at the top level of the repo with the content:</p>
<pre><code class="language-javascript">const path = require("path");
const webpack = require("webpack");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const HtmlBeautifierPlugin = require("html-beautifier-webpack-plugin");

module.exports = {
  mode: "development",
  entry: "./target/index.js",
  output: {
    path: path.resolve(__dirname, "public"),
    filename: "js/libs/bundle.js",
    clean: false,
  },
  devtool: "source-map",
  module: {
    rules: [
      {
        // docs: https://webpack.js.org/configuration/module/#resolvefullyspecified
        test: /.m?js/,
        resolve: {
          fullySpecified: false,
          alias: {
            models: "../src/amplify/models/index.js",
            "ui-components": "../src/amplify/ui-components",
          },
        },
      },
      {
        test: /.jsx$/,
        exclude: /node_modules/,
        use: ["babel-loader"],
      },
    ],
  },
  resolve: {
    extensions: ["", ".js", ".jsx"],
  },
  plugins: [
    new webpack.ProvidePlugin({
      process: "process/browser",
    }),
    new HtmlWebpackPlugin({
      template: "./public/index.html.tmpl",
      filename: "index.html",
    }),
    new HtmlBeautifierPlugin(),
  ],
};</code></pre>
<h4>Add a script to run webpack</h4>
<p>Add the following line to the <code>"scripts"</code> section of <code>package.json</code>. It will let you run a watch process that updates the bundle automatically when you change any of the amplify files or when shadow-cljs updates <code>target/index.js</code>.</p>
<pre><code>    "pack": "webpack --watch"</code></pre>
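<p>After this edit, the <code>"scripts"</code> section of <code>package.json</code> will look something like the following (the <code>start</code> entry comes from create-reagent-app and runs <code>shadow-cljs watch app</code>; your other entries may differ):</p>
<pre><code class="language-json">{
  "scripts": {
    "start": "shadow-cljs watch app",
    "pack": "webpack --watch"
  }
}</code></pre>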
<h3>Update git</h3>
<h4>Update .gitignore</h4>
<p>add the following to <code>.gitignore</code></p>
<pre><code>/target/</code></pre>
<h4>Add all the new files to the commit</h4>
<p>git add -A<br />
git commit -m &#8220;Sync up all the final changes&#8221;</p>
<h2>Running the development service locally</h2>
<p>Start the shadow-cljs watch process. (<code>shadow-cljs watch app</code>) using the npm command:</p>
<pre><code>npm start</code></pre>
<p>And in another terminal window, also at the top of the repo run the webpack watch process:</p>
<pre><code>npm run pack
</code></pre>
<p>You should see something like the following. The images and values depend on how you set up the data when following along with the first part of the <a class="keychainify-checked" href="https://welearncode.com/studio-vacation-site/">Build a Vacation Rental Site with Amplify Studio</a>.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/working-basic-integration.png?ssl=1" alt="Initial Integration View" /></p>
<h2>Troubleshooting</h2>
<p>If, when you start the shadow-cljs process with <code>npm start</code>, you get something like:</p>
<pre><code class="language-bash">...
shadow-cljs - watching build :app
[:app] Configuring build.
[:app] Compiling ...
[2022-01-02 21:55:15.214 - WARNING] :shadow.cljs.devtools.server.util/handle-ex - {:msg {:type :start-autobuild}}
AssertionError Assert failed: (map? rc)
...</code></pre>
<p>The <code>js-provider :external</code> config in <code>shadow-cljs.edn</code> is masking the actual error. In order to see what the error is, comment out the <code>:js-options</code> block in <code>shadow-cljs.edn</code> like:</p>
<pre><code class="language-edn">:builds
 {:app
  {:target     :browser
   :output-dir "public/js"
   :asset-path "/js"
   ;; :js-options {:js-provider    :external
   ;;              :external-index "target/index.js"}
   :modules    {:main
                {:init-fn amplifystudio-cljs-tutorial.app.core/main}}}
</code></pre>
<p>and then run <code>npm start</code> again and see what the error is. Correct the error and then remember to uncomment the <code>:js-options</code> block.</p>
<h2>Completing the Tutorial with Amplify UI Overrides</h2>
<p>We pick back up the original tutorial at the <code>Use a Prop</code> section to show how the UI Components can be customized just with Component Props and runtime Overrides.</p>
<p>Overrides are a powerful feature built into the Amplify UI Components that allows you to inject attributes into the children of components at runtime. They make the Amplify UI Components very flexible without having to modify the actual code of the components. This means you can update the Figma design and still refresh your local copy of the ui-components with an <code>amplify pull</code>, since you make no local changes to that code.</p>
<h3>Use a Prop</h3>
<blockquote><p>You can customize these React components in your own code. First, you can use props in order to modify your components. If you wanted to make your grid of rentals into a list, for example, you could pass the prop type=&#8221;list&#8221; to your RentalCollection.</p></blockquote>
<p>In Javascript you would say:</p>
<pre><code>&lt;RentalCollection type="list" /&gt;</code></pre>
<p>and in Clojurescript:</p>
<pre><code>[:&gt; RentalCollection {:type "list"}]</code></pre>
<h4>And that will make the view go from a grid to a list:</h4>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/collection-as-list.png?ssl=1" alt="Collection as a list" /></p>
<p>The props are listed for each component type at <a class="keychainify-checked" href="https://ui.docs.amplify.aws/components">Amplify UI Connected Components</a></p>
<h3>Use an Override</h3>
<p>Overrides allow you to inject props into the children of a component.</p>
<p>In our example RentalCollection, the images in the child cards are kind of squashed. To fix that we want to set the <code>objectFit</code> prop of the image element of the card to <code>cover</code>.</p>
<p>In Javascript you would use:</p>
<pre><code class="language-javascript">&lt;RentalCollection
  type="list"
  overrides={{
    "Collection.CardA[0]": {
      overrides: {
        "Flex.Image[0]": { objectFit: "cover" },
      },
    },
  }}
/&gt;</code></pre>
<p>In Clojurescript we use:</p>
<pre><code class="language-clojure">[:&gt; RentalCollection {:type "list"
                      :overrides {"Collection.CardA[0]"
                                  {:overrides {"Flex.Image[0]"
                                                {:object-fit "cover"}}}}}]</code></pre>
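<p>Conceptually, an override is just a nested props lookup keyed by the child element&#8217;s name. A hypothetical sketch of that lookup (the helper below is invented for illustration; the generated Amplify UI components do the real work internally):</p>
<pre><code class="language-javascript">// The overrides map from the example above
const overrides = {
  "Collection.CardA[0]": {
    overrides: { "Flex.Image[0]": { objectFit: "cover" } },
  },
};

// Hypothetical helper: find the override props destined for one child
function overridesFor(allOverrides, childKey) {
  const entry = allOverrides[childKey] || {};
  return entry.overrides || {};
}

const imageProps = overridesFor(overrides, "Collection.CardA[0]")["Flex.Image[0]"];
console.log(imageProps.objectFit); // "cover"</code></pre>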
<h4>Now the images are no longer squished:</h4>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2022/01/props-applied-to-children.png?ssl=1" alt="objectFit cover prop applied to children images" /></p>
<h2>Themes and Conclusion</h2>
<p>That completes our tour of the differences between using Clojurescript and Javascript with Amplify Studio and Amplify UI Connected Components.</p>
<p>You can refer back to the original AWS tutorial <a class="keychainify-checked" href="https://welearncode.com/studio-vacation-site/">Build a Vacation Rental Site with Amplify Studio</a> for the remaining content on how to use the <a class="keychainify-checked" href="https://www.figma.com/community/plugin/1040722185526429545/AWS-Amplify-Theme-Editor">AWS Amplify Theme Editor</a> in Figma to add a theme to the UI Components. This should work without changing any of your Clojurescript code, since you modify the UI component code in Figma and reload it via <code>amplify pull</code>.</p>
<p>It is also possible to <a class="keychainify-checked" href="https://ui.docs.amplify.aws/theming">apply themes directly in your code</a>. Doing that with Clojurescript will be left to a possible future article.</p>
<p>The full project / code for this repo is at <a class="keychainify-checked" href="https://github.com/rberger/amplifystudio-cljs-tutorial">https://github.com/rberger/amplifystudio-cljs-tutorial</a>.</p>
<p>Feel free to post issues or questions there.</p><p>The post <a href="https://www.ibd.com/howto/amplify-studio-cljs-tutorial/">Use Amplify Studio Figma Connector with Clojurescript</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1933</post-id>	</item>
		<item>
		<title>Set up SSL/TLS for shadow-cljs https server</title>
		<link>https://www.ibd.com/howto/set-up-ssl-keystore-for-shadow-cljs-2/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Thu, 21 Oct 2021 05:13:34 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[Clojure / Clojurescript]]></category>
		<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<guid isPermaLink="false">https://www.ibd.com/howto/set-up-ssl-keystore-for-shadow-cljs-2/</guid>

					<description><![CDATA[<p>Howto create TLS Server Certificate and use with clojurescript shadow-cljs development server</p>
<p>The post <a href="https://www.ibd.com/howto/set-up-ssl-keystore-for-shadow-cljs-2/">Set up SSL/TLS for shadow-cljs https server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1>Set Up SSL/TLS HTTPS for shadow-cljs Development Server</h1>
<p>While developing clojurescript web apps, you may need the development http server, <a href="https://github.com/thheller/shadow-cljs">shadow-cljs</a>, to operate with SSL/TLS and serve HTTPS, not just HTTP.</p>
<p>This is particularly true if you need to test things on an iPhone or Android phone while still running the development server, so you can iterate on changes just as quickly as when you are working with desktop clients.</p>
<p>It&#8217;s a bit tricky to get everything lined up to make SSL/TLS work locally, as Apple (and, I presume, other browser vendors) no longer supports self-signed certificates for HTTPS servers. So you need a private CA and a certificate generated from that private CA.</p>
<p>This is a guide to set up:</p>
<ul>
<li>A Private Certificate Authority (CA)</li>
<li>A Server Certificate for your shadow-cljs development server</li>
<li>How to configure shadow-cljs.edn for SSL</li>
<li>How to install the CA Root Certificate on other clients (like an iPhone) so they can access the shadow-cljs servers</li>
</ul>
<p><em><strong>NOTE</strong>: This server / CA / Certificates should never be used in production or in any particularly public way. It’s not secure. We’re doing this to get around the normal browser / server security just for local development.</em></p>
<h2>Install mkcert</h2>
<p>See the following for more info or how to install on Linux: <a href="https://github.com/FiloSottile/mkcert">GitHub &#8211; FiloSottile/mkcert: A simple zero-config tool to make locally trusted development certificates with any names you’d like.</a></p>
<h3>Install mkcert on macOS</h3>
<pre><code>&gt; brew install mkcert
> brew install nss # if you use Firefox</code></pre>
<h2>Create a local CA to be used by mkcert and clients</h2>
<pre><code>&gt; mkcert -install
Created a new local CA &#x1f4a5;
Sudo password:
The local CA is now installed in the system trust store! &#x26a1;
The local CA is now installed in the Firefox trust store (requires browser restart)! &#x1f98a;
The local CA is now installed in Java's trust store! &#x2615;</code></pre>
<h2>Create a pkcs12 certificate</h2>
<p>It&#8217;s easiest to do this in the directory where you run the shadow-cljs project.<br />
Create a subdirectory <code>ssl</code> at the same level as <code>shadow-cljs.edn</code> (usually the top level of the repo) and cd into <code>ssl</code>:</p>
<pre><code>❯ cd ~/work/my-project
❯ ls
Makefile        RELEASE_TAG     bin             dev             package.json    shadow-cljs.edn test
README.org      amplify         deps.edn        node_modules    resources       src             yarn.lock

❯ mkdir ssl
❯ cd ssl</code></pre>
<p>Create the certificate that the shadow-cljs servers will use as their server certificate. You want to specify all the domains and IPs associated with the certificate and every address you will use to access the server.<br />
In my case my iMac has two interfaces (Ethernet and Wi-Fi) plus localhost, and just to be safe I&#8217;m putting in their IPv6 addresses as well.</p>
<pre><code>❯ mkcert -pkcs12 discovery.local localhost  192.168.20.10 192.168.20.11 127.0.0.1 ::1 fd95:cb6f:7955:0:1878:b8b5:1b3b:ad27 fd95:cb6f:7955:0:4cd:c922:d1b3:2eb5

Created a new certificate valid for the following names &#x1f4dc;
 - "discovery.local"
 - "localhost"
 - "192.168.20.10"
 - "192.168.20.11"
 - "127.0.0.1"
 - "::1"
 - "fd95:cb6f:7955:0:1878:b8b5:1b3b:ad27"
 - "fd95:cb6f:7955:0:4cd:c922:d1b3:2eb5"

The PKCS#12 bundle is at "./discovery.local+7.p12" &#x2705;

The legacy PKCS#12 encryption password is the often hardcoded default "changeit" &#x2139;

It will expire on 20 January 2024 &#x1f5d3;</code></pre>
<h2>Install the cert into the keystore</h2>
<p><strong><em>NOTE:</em></strong> <em>The passwords you use here should not be used anywhere else, particularly on public services. They do not have to be super-secret, strong passwords, as they will be stored in the clear in your shadow-cljs.edn.</em></p>
<p>You will create a local Java JKS keystore in <code>ssl</code> to be used by the shadow-cljs servers.</p>
<ul>
<li><code>Destination Password</code>: This will be the password specified in shadow-cljs.edn to gain access to the keystore.  Our example will be <code>super-secret</code></li>
<li><code>Source keystore password</code>: The password that <code>mkcert</code> used to generate the Server Certificate and thus the password of the Server Certificate. I could not find a way to specify it. It defaults to <code>changeit</code>
<pre><code>❯ keytool -importkeystore -destkeystore keystore.jks -srcstoretype PKCS12 -srckeystore discovery.local+7.p12
Importing keystore discovery.local+7.p12 to keystore.jks...
Enter destination keystore password: super-secret
Re-enter new password: super-secret
Enter source keystore password: changeit
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled</code></pre>
</li>
</ul>
<h2>Configure shadow-cljs.edn to enable SSL</h2>
<p>Mainly you need to add an <code>:ssl</code> section near the top of <code>shadow-cljs.edn</code>:</p>
<pre><code>{:deps  true
 :nrepl {:port 8777}
 :ssl {:keystore "ssl/keystore.jks"
       :password "super-secret"}
 :dev-http {8020 {:root "resources/public"}}
 ... rest of your shadow-cljs.edn file...</code></pre>
<p>There is no need to specify hostnames. In fact, doing so limits access to IP addresses that resolve to that name, which may not be what you want.<br />
More info on the <code>:ssl</code> configuration is in the <a href="https://shadow-cljs.github.io/docs/UsersGuide.html#_ssl">Shadow CLJS User&#8217;s Guide: SSL</a>.</p>
<p>[Re]start your shadow-cljs watch process. At some point during startup it should report <code>https</code> as the protocol for the HTTP and shadow-cljs servers, something like:</p>
<pre><code>...
shadow-cljs - HTTP server available at https://localhost:8020
shadow-cljs - server version: 2.15.8 running at https://localhost:9631
shadow-cljs - nREPL server started on port 8777
shadow-cljs - watching build :app
...</code></pre>
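<p>To confirm the TLS setup from the command line (outside a browser), you can attempt a handshake against the dev server with Python&#8217;s standard library. This is just a sketch; the host, port, and root CA path below are assumptions matching the examples in this post:</p>

```python
import socket
import ssl

def check_tls(host: str, port: int, ca_file: str) -> str:
    """Connect to the dev server, verify its certificate against the
    private CA root, and return the negotiated TLS version."""
    ctx = ssl.create_default_context(cafile=ca_file)
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example (requires the shadow-cljs server from this post to be running):
# check_tls("localhost", 8020,
#           "/Users/rberger/Library/Application Support/mkcert/rootCA.pem")
```

<p>If the handshake fails with a certificate verification error, the server is likely presenting a certificate that was not signed by the CA root you passed in.</p>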
<p>Assuming you created the certificate with the other domain names and IP addresses associated with the machine running the server, those will also work as the host address in your client URLs. But this only works for clients running on the same machine as the server.</p>
<p>If you want another device (like an iPhone or another computer) to access this server, follow the next steps.</p>
<h2>Export the Root CA of your Private CA to other Clients</h2>
<p>In order for other machines on your LAN to access the shadow-cljs server running with the Private CA and Server Certificate set up in the earlier steps, you will need to export the Root CA from that machine to those other clients.</p>
<h3>Find the location of the Root Certificate of the Private CA</h3>
<p>When you ran <code>mkcert -install</code> it created the root certificates of the Private CA and stashed them somewhere appropriate for your system. You can find out where with:</p>
<pre><code>❯ mkcert -CAROOT
/Users/rberger/Library/Application Support/mkcert

❯ ls '/Users/rberger/Library/Application Support/mkcert'
rootCA-key.pem rootCA.pem</code></pre>
<p>You will want to copy the <code>rootCA.pem</code> to other clients that would access the shadow-cljs servers.</p>
<h3>For transferring to other Macs or iOS devices</h3>
<pre><code>open '/Users/rberger/Library/Application Support/mkcert'</code></pre>
<p>This opens a Finder window showing the directory with the pem files:<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/10/ssl-airdrop-root-cert.png?ssl=1" alt="Root Cert in Finder"/></p>
<p>Then select AirDrop to send <code>rootCA.pem</code> to other macOS or iOS devices.<br />
Otherwise you can email it or send the file some other way to the destination device.</p>
<h2>Install the Private CA Root Cert on iOS device</h2>
<ul>
<li>
Once you send the Cert to an iOS device, you will get a message<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/10/ssl-choose-device.png?ssl=1" alt="Choose Device"/></li>
<li>
Select iPhone and then select Close:<br />
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/10/ssl-profile-downloaded-close.jpg?ssl=1" alt="Select Close"/></li>
<li>
Go to Settings and you&#8217;ll see a new option, <code>Profile Downloaded</code>. Tap it and go through the rest of the dialogs, agreeing to install the downloaded profile.</li>
</ul>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/10/ssl-settings.jpg?ssl=1" alt="Profile Downloaded in Settings"/></p>
<ul>
<li>After completing all the install dialogs, this client should be ready to connect to the shadow-cljs using https.</li>
</ul><p>The post <a href="https://www.ibd.com/howto/set-up-ssl-keystore-for-shadow-cljs-2/">Set up SSL/TLS for shadow-cljs https server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1854</post-id>	</item>
		<item>
		<title>Accessing AppSync APIs that require Cognito Login outside of Amplify</title>
		<link>https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Wed, 01 Sep 2021 00:36:18 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[AppSync]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Cognito]]></category>
		<guid isPermaLink="false">https://www.ibd.com/howto/access-appsync-outside-amplify-2/</guid>

					<description><![CDATA[<p>Access your AppSync GraphQL APIs that require Cognito Logins with arbitrary tools outside of Amplify Apps</p>
<p>The post <a href="https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/">Accessing AppSync APIs that require Cognito Login outside of Amplify</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>The Need</h2>
<p>You have this great Amplify App using AppSync GraphQL. Eventually you find that you need to access the data in your AppSync GraphQL database from tools other than your Amplify App. It&#8217;s easy if your AppSync API is protected only by an API Key, but that isn&#8217;t great security for your data!</p>
<p>One way to protect your AppSync data is to use <a href="https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/#cognito-user-pools">Cognito User Pools</a>. Amplify makes it pretty transparent if you are using Amplify to build your clients. AppSync lets you do really nice <a href="https://docs.aws.amazon.com/appsync/latest/devguide/security-authorization-use-cases.html">table- and record-level access control based on logins and roles</a>.</p>
<p>What happens if you want to access that data from something other than an Amplify based client? How do you &#8220;login&#8221; and get the JWT credentials you need to access your AppSync APIs?</p>
<h2>Use AWS CLI</h2>
<p>The most general way is to use the AWS CLI to effectively login and retrieve the JWT credentials that can then be passed in the headers of any requests you make to your AppSync APIs.</p>
<p>Unfortunately it&#8217;s not as easy as just having your login and password. It also depends on how you configured your Cognito User Pool and its related Client Apps.</p>
<h3>Cognito User Pool Client App</h3>
<p>You can have multiple Client Apps specified for your Cognito User Pool. I suggest having one dedicated to these external applications. That way you can have custom configuration just for this use without disrupting your main Amplify apps, and you can easily turn it off if you need to.</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/User-pool-app-clients.png?ssl=1" alt="User Pool Client Apps" title="User Pool Client Apps"/></p>
<p>In my case I created a new client app <code>shoppabdbe800b-rob-test2</code> as a way to test a client app with no <code>App Client Secret</code>. This makes it easier to access from the command line, as you do not have to generate a Secret Hash (I describe how to deal with that below).</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/app-client-config-no-secret.png?ssl=1" alt="App Client Config with no secret" title="App Client Config with no secret"/></p>
<p>If you want to allow admin-level access (i.e. a user with admin permissions) you need to check <code>Enable username password auth for admin APIs for authentication (ALLOW_ADMIN_USER_PASSWORD_AUTH)</code>.</p>
<p>If you want to allow regular users to login you must also select <code>Enable username password based authentication (ALLOW_USER_PASSWORD_AUTH)</code></p>
<p>The defaults for the other fields should be ok. Be sure to save your changes.</p>
<h3>Minimal IAM permissions</h3>
<p>As far as I can tell, these are the minimal IAM permissions to make the aws <code>cognito-idp</code> command work for admin and regular users of AppSync (replace the Resource arn with the arn of the user pool[s] you want to control):</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cognito-idp:AdminInitiateAuth",
                "cognito-idp:AdminGetUser"
            ],
            "Resource": "arn:aws:cognito-idp:us-east-1:XXXXXXXXXXXXX:userpool/us-east-1_XXXXXXXXX"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "cognito-idp:GetUser",
                "cognito-idp:InitiateAuth"
            ],
            "Resource": "*"
        }
    ]
}</code></pre>
<h3>Get the Credentials with no App Client Secret</h3>
<p>This example is if you did not set the App Client Secret.</p>
<p>You should now be able to get the JWT credentials from the AWS CLI.</p>
<p>This assumes you have <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html">set up your</a> <code>~/.aws/credentials</code> file, or whatever is appropriate for your command-line environment, so that you have the permissions to access this service.</p>
<ul>
<li>When using the <code>ADMIN_USER_PASSWORD_AUTH</code></li>
</ul>
<pre><code>aws cognito-idp admin-initiate-auth --user-pool-id us-east-1_XXXXXXXXXX --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username1,PASSWORD=XXXXXXXXXXXXX &gt; creds.json</code></pre>
<ul>
<li>When using the <code>USER_PASSWORD_AUTH</code></li>
</ul>
<pre><code>aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username2,PASSWORD=XXXXXXXXXXXX &gt; creds.json</code></pre>
<p>Of course replace the <code>XXXX</code>&#8216;s with the actual values.</p>
<ul>
<li><code>user-pool-id</code> &#8211; The pool id found at the top of the <em>User Pool Client Apps</em> page</li>
<li><code>client-id</code> &#8211; The <code>client-id</code> of the <code>app client</code> you are using</li>
<li><code>USERNAME</code> &#8211; The Username normally used to login to your Amplify app</li>
<li><code>PASSWORD</code> &#8211; The Password normally used to login to your Amplify app</li>
</ul>
<p>The results will be in <code>creds.json</code>. (Omit the <code>&gt; creds.json</code> redirect if you just want to see the results on stdout.)</p>
<h3>Get the Credentials when there is an App Client Secret</h3>
<p>This assumes you have an App Client that has an <code>app secret key</code> set.</p>
<p>The main thing here is you need to generate a <code>secret hash</code> to send along with the command.</p>
<p>You can do that by creating a little python program to generate it for you when you need it:</p>
<pre><code class="language-python">#!/usr/bin/env python3

import sys
import hmac, hashlib, base64

if len(sys.argv) == 4:
    username = sys.argv[1]
    app_client_id = sys.argv[2]
    app_client_secret = sys.argv[3]
    # The secret hash is the HMAC-SHA256 of (username + client id),
    # keyed by the app client secret, then base64-encoded
    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(app_client_secret, 'utf-8')
    secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()
    print("SECRET HASH:", secret_hash)
else:
    print("usage:", sys.argv[0], "&lt;username&gt; &lt;app_client_id&gt; &lt;app_client_secret&gt;")</code></pre>
<p>Save the file someplace that you can execute it from like <code>~/bin/app-client-secret-hash</code> and make it executable (<code>chmod a+x ~/bin/app-client-secret-hash</code>).</p>
<p>You will need:</p>
<ul>
<li><code>app-client-id</code> &#8211; The <code>client-id</code> of the <code>app client</code> you are using</li>
<li><code>app-client-secret</code> &#8211; The secret of the <code>app client</code> you are using (it&#8217;s on the App Client page of the User Pool)</li>
<li><code>USERNAME</code> &#8211; The Username normally used to login to your Amplify app</li>
</ul>
<p>To use:</p>
<pre><code>~/bin/app-client-secret-hash  &lt;username&gt; &lt;app_client_id&gt; &lt;app_client_secret&gt;</code></pre>
<p>Where of course you replace the arguments with the actual values.</p>
<p>The result is a <code>secret-hash</code> you will use in the following command to get the actual JWT credentials:</p>
<pre><code>aws cognito-idp admin-initiate-auth --user-pool-id us-east-1_XXXXXXXXXX --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username3,PASSWORD='secret password',SECRET_HASH='secret-hash' &gt; creds.json</code></pre>
<p>You can do the same thing with <code>USER_PASSWORD_AUTH</code> if you need that instead:</p>
<pre><code>aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=rob+admin,PASSWORD=XXXXXXXXX,SECRET_HASH='secret-hash' &gt; creds.json</code></pre>
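<p>As a sanity check, the hash the script produces can be recomputed inline with Python&#8217;s standard library. The username, client id, and secret below are made-up example values, not real credentials:</p>

```python
import base64
import hashlib
import hmac

def cognito_secret_hash(username: str, client_id: str, client_secret: str) -> str:
    """Base64-encoded HMAC-SHA256 of username+client_id, keyed by the client secret."""
    message = (username + client_id).encode("utf-8")
    key = client_secret.encode("utf-8")
    return base64.b64encode(hmac.new(key, message, hashlib.sha256).digest()).decode()

# Made-up example values:
print(cognito_secret_hash("username3", "example-client-id", "example-secret"))
```

<p>The printed value is what goes in the <code>SECRET_HASH</code> auth parameter for that username and client id.</p>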
<h2>Using the Credentials</h2>
<p>How you use these credentials depends on what tool you use and how you are trying to access your AppSync APIs.</p>
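<p>For scripting, you can pull the <code>IdToken</code> out of <code>creds.json</code> and inspect its claims without any extra libraries. This is a sketch, assuming the standard <code>aws cognito-idp ... initiate-auth</code> response shape with an <code>AuthenticationResult.IdToken</code> field; the token below is fabricated for illustration:</p>

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def id_token(creds: dict) -> str:
    """Extract the IdToken from an initiate-auth response."""
    return creds["AuthenticationResult"]["IdToken"]

# Fabricated example standing in for the contents of creds.json:
fake_jwt = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).decode().rstrip("=")
    for part in ({"alg": "RS256"}, {"sub": "user-123", "exp": 1700000000})
) + ".fake-signature"
creds = {"AuthenticationResult": {"IdToken": fake_jwt}}
print(jwt_claims(id_token(creds))["sub"])  # prints: user-123
```

<p>In real use you would load <code>creds</code> with <code>json.load(open("creds.json"))</code> and pass the <code>IdToken</code> as the <code>Authorization</code> header value.</p>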
<h3>From some Javascript</h3>
<p>You can just add in the <code>IdToken</code> from the <code>creds.json</code> as an <code>Authorization</code> header when you build the request:</p>
<pre><code class="language-javascript">function graphQLFetcher(graphQLParams) {
  const APPSYNC_API_URL = "TYPE_YOUR_APPSYNC_URL";
  const credentialsAppSync = {
    Authorization: "eyJraWQiOiI1dVUwMld...",
  };
  return fetch(APPSYNC_API_URL, {
    method: "post",
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json",
      ...credentialsAppSync,
    },
    body: JSON.stringify(graphQLParams),
    credentials: "omit",
  }).then(function (response) {
    return response.json().catch(function () {
      return response.text();
    });
  });
}</code></pre>
<p>If you are using a GraphQL tool that needs to access your AppSync APIs, the tool should have a way for you to supply the token so it can add it as an <code>Authorization</code> header for its own requests.</p>
<p>Do let me know if you have some examples of tools that would make use of this.</p>
<h2>References</h2>
<ul>
<li><a href="https://aws.amazon.com/blogs/mobile/appsync-graphiql-local/" title="Explore AWS AppSync APIs with GraphiQL from your local machine">Explore AWS AppSync APIs with GraphiQL from your local machine</a></li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/cognito-unable-to-verify-secret-hash/">How do I troubleshoot &#8220;Unable to verify secret hash for client &lt;client-id&gt;&#8221; errors from my Amazon Cognito user pools API?</a></li>
</ul><p>The post <a href="https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/">Accessing AppSync APIs that require Cognito Login outside of Amplify</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1803</post-id>	</item>
		<item>
		<title>Blogging Once More</title>
		<link>https://www.ibd.com/howto/first-git-blog-post/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Sat, 07 Aug 2021 07:59:51 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<guid isPermaLink="false">https://www.ibd.com/howto/first-git-blog-post/</guid>

					<description><![CDATA[<p>Time to start blogging, so of course, have to spend days tweaking up the blog and the blog process before writing anything!</p>
<p>The post <a href="https://www.ibd.com/howto/first-git-blog-post/">Blogging Once More</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>It&#8217;s Time to Blog Again!</h2>
<p>I&#8217;ve been itching to do some blogging about various projects I&#8217;ve been working on, both for <a href="https://www.visx.live">work</a> and personally. In particular, since I had the honor of being invited to be part of the <a href="https://aws.amazon.com/developer/community/community-builders/">AWS Community Builders</a>, I need to up my blogging game.</p>
<h2>Joining the POSSE</h2>
<p>One goal is to do it in the <a href="https://indieweb.org/POSSE"><em>Publish (on your) Own Site, Syndicate Elsewhere</em> (POSSE)</a> style: publish on my personal blog but then have it (hopefully) automatically syndicated to other sites like <a href="https://medium.com/me/stories/drafts">Medium</a>, <a href="https://dev.to/rberger">Dev.to</a>, <a href="https://hashnode.com/@rberger">Hashnode</a>, etc. In other words: Write Once, Publish Everywhere.</p>
<h2>Markdown, Emacs and Github at the Source</h2>
<p>And I want to write it in Markdown with Emacs and have the authoritative source in Github. What more is there to say?</p>
<h2>WordPress as the Personal Website</h2>
<p>My personal website is based on WordPress running on AWS Lightsail. I have a love/hate relationship with WordPress. It&#8217;s one of those technologies that is powerful because so many people use it. And I have to help other folks with their WordPress setups, so I want to keep my finger in it. I&#8217;ve tried the various static-site generators, and so far at least, it&#8217;s been worth staying with WordPress.</p>
<p>Unfortunately, WordPress is also the most painful platform on which to write in Markdown and have the content come from Github.</p>
<h3>Git it Write makes it possible</h3>
<p>I have found what looks like a good solution: <a href="https://wordpress.org/plugins/git-it-write/">Git it Write</a></p>
<p>It is a WordPress plugin that lets you connect a Github repo to your WordPress instance. It uses a webhook so that every time you update a specified branch of the repo, it pushes the markdown and images from the repo into WordPress as a Post, Page, Reusable Block, or Attachment. It uses YAML frontmatter in the markdown source to control some of the meta info for the post.</p>
<p>You organize the file hierarchy in the repo to match the permalink hierarchy of your website. In my case, my default permalink is a custom <code>/%category%/%postname%/</code> <img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/permalink-settings.png?ssl=1" alt="Permalink Settings" title="Permalink Settings" />. And the file layout is like this:</p>
<pre><code>.
├── LICENSE
├── README.md
├── _images
│   ├── permalink-settings.png
│   └── san-juan-mountains.jpg
└── posts
    ├── anti-ageing
    ├── blogging
    │   └── first-git-blog-post.md
    ├── how-the-world-works
    │   ├── creating_the_future_of_abundance
    │   └── demand_transformation
    ├── howto
    ├── macintosh
    ├── robotics-2
    ├── scalable-deployment
    ├── sysadmin
    ├── telecom
    └── uncategorized</code></pre>
<p>Right now I only have the one article that you are reading now, <code>first-git-blog-post.md</code>, but I put in directories for all the other categories I already had in my blog as placeholders.</p>
<p>Also notice the <code>_images</code> directory. Unfortunately, you have to put all the images you use in any post in this one top-level <code>_images</code> directory, so you have to make the filenames unique across posts, which is a shame in terms of keeping things organized. But the good news is the plugin takes care of getting the images into WordPress.</p>
<p>You refer to an image in your Markdown like:</p>
<pre><code>![Permalink Settings](/_images/permalink-settings.png "Permalink Settings")</code></pre>
<p>The setup of the plugin is pretty easy:</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/git-it-write-settings.png?ssl=1" alt="Git it write top level settings" title="Git it Write top level settings" /></p>
<p>Just click on the <code>+ Add a new repository to publish posts from</code> and fill in the info about your repo:</p>
<p><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/git-it-write-repo-settings.png?ssl=1" alt="Repo settings" title="Git it Write Repo Settings" /></p>
<p>You can set a subdirectory in the repo if you want to carve up things like Posts, Pages, etc in the same repo.</p>
<p><strong>NOTE:</strong> the <code>_images</code> directory is still at the top of the repo and shared across them all.</p>
<p>The documentation for <em>Git it Write</em> is at <a href="https://www.aakashweb.com/docs/git-it-write/">https://www.aakashweb.com/docs/git-it-write/</a>.</p>
<p>In any case, it seems like a pretty nice solution for now to allow me to write in markdown and make Git the authoritative source for posts. Which means I can use the git repo to push to other sites like Medium and Dev.to. My experiences attempting that will hopefully be in a future post.</p><p>The post <a href="https://www.ibd.com/howto/first-git-blog-post/">Blogging Once More</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1769</post-id>	</item>
		<item>
		<title>First Post with Spacemacs and org2blog</title>
		<link>https://www.ibd.com/blogging/first-post-with-spacemacs-and-org2blog/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Sat, 26 Jan 2019 07:02:00 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[blogging]]></category>
		<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<category><![CDATA[emacs]]></category>
		<category><![CDATA[spacemacs]]></category>
		<guid isPermaLink="false">https://blog.ibd.com/?p=1674</guid>

					<description><![CDATA[<p>Haven&#8217;t been blogging for quite a while. I recently got around to redeploying and updating my WordPress blog to AWS Lightsail. It works so much better (and less expensive) now! The Dev team here at Omnyway wanted to start blogging on our company WordPress blog about all the cool Clojure stuff we&#8217;re open sourcing. But they all like to use&#8230;</p>
<p>The post <a href="https://www.ibd.com/blogging/first-post-with-spacemacs-and-org2blog/">First Post with Spacemacs and org2blog</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Haven&#8217;t been blogging for quite a while. I recently got around to redeploying and updating my WordPress blog to <a href="https://aws.amazon.com/lightsail/">AWS Lightsail</a>. It works so much better (and less expensive) now!</p>
<p>The Dev team here at <a href="https://www.omnyway.com">Omnyway</a> wanted to start blogging on our company WordPress blog about all the cool <a href="https://github.com/omnyay-labs">Clojure stuff we&#8217;re open sourcing</a>. But they all like to use <a href="https://orgmode.org/">Emacs Org Mode</a> for their writing.</p>
<p>I had to figure out how to go from Emacs org mode to WordPress, so I am first trying it here on my personal blog. Since I use Spacemacs (in holy mode of course) as my form of Emacs, I wanted to add it as a Layer.</p>
<p>Spacemacs prefers you to have layers that combine packages and configuration. I could have just added it to <code>dotspacemacs-additional-packages</code> in the spacemacs init file but I wanted to try doing it as a layer. I&#8217;ve tried (unsuccessfully) to create my own layers before, but I was determined to get it to work this time! Turns out there isn&#8217;t really that much of an advantage of using a layer in this case other than being able to pull it from github dynamically.</p>
<div id="outline-container-org8bf76fa" class="outline-2">
<h2 id="org8bf76fa">Creating a Layer</h2>
<div class="outline-text-2" id="text-org8bf76fa"></div>
<div id="outline-container-orgdf43ca5" class="outline-3">
<h3 id="orgdf43ca5">Use Spacemacs to make a private layer template</h3>
<div class="outline-text-3" id="text-orgdf43ca5">
 You don&#8217;t need to do this, but it helps to have a scaffold template. While inside Spacemacs:</p>
<pre class="example">&lt;META-x&gt; configuration-layer/create-layer
</pre>
<p>This will prompt you for a directory where your private layers go (mine was <code>~/.spacemacs.d/layers</code>). Then it will prompt you for the name of the layer. In this case <code>org2blog</code>.</p>
<p>It will then create a <code>README.org</code> and a <code>packages.el</code> with some things filled in. The README will be used for the help for the package.</p>
</div>
</div>
<div id="outline-container-org5f03f04" class="outline-3">
<h3 id="org5f03f04">Update the scaffold files to do the actual work</h3>
<div class="outline-text-3" id="text-org5f03f04">
 I updated <code>packages.el</code> to the following (most of the boilerplate is removed below):</p>
<div class="org-src-container">
<pre class="src src-emacs-lisp"><span style="color: #2aa1ae; background-color: #292e34;">;;; </span><span style="color: #2aa1ae; background-color: #292e34;">packages.el --- org2blog layer packages file for Spacemacs.</span>

<span style="color: #4f97d7;">(</span><span style="color: #4f97d7; font-weight: bold;">setq</span> org2blog-packages
  '<span style="color: #bc6ec5;">(</span><span style="color: #2d9574;">(</span>org2blog <span style="color: #4f97d7;">:location</span> <span style="color: #67b11d;">(</span>recipe <span style="color: #4f97d7;">:fetcher</span> github
                                <span style="color: #4f97d7;">:repo</span> <span style="color: #2d9574;">"org2blog/org2blog"</span><span style="color: #67b11d;">)</span><span style="color: #2d9574;">)</span><span style="color: #bc6ec5;">)</span><span style="color: #4f97d7;">)</span>

<span style="color: #4f97d7;">(</span><span style="color: #4f97d7; font-weight: bold;">defun</span> <span style="color: #bc6ec5; font-weight: bold;">org2blog/init-org2blog</span> <span style="color: #bc6ec5;">()</span>

  <span style="color: #bc6ec5;">(</span><span style="color: #4f97d7; font-weight: bold;">use-package</span> <span style="color: #a45bad;">org2blog</span><span style="color: #bc6ec5;">)</span>
  <span style="color: #2aa1ae; background-color: #292e34;">;</span><span style="color: #2aa1ae; background-color: #292e34;">(require 'org2blog-autoloads)</span>
  <span style="color: #4f97d7;">)</span>
<span style="color: #2aa1ae; background-color: #292e34;">;;; </span><span style="color: #2aa1ae; background-color: #292e34;">packages.el ends here</span>
</pre>
</div>
<p>The first statement, <code>setq org2blog-packages</code>, will pull the package from Github, and the second, <code>org2blog/init-org2blog</code>, will be used to initialize the package when it&#8217;s lazily loaded.</p>
<p>Then I needed to add the layer (<code>org2blog</code>) and its basic config to the spacemacs init file (<code>~/.spacemacs</code> in my case).</p>
<p>The config specifies a list of blogs you can log into. This example shows only one. A more complicated config could go into the <code>dotspacemacs/user-config</code> section of the spacemacs init file instead if you prefer.</p>
<div class="org-src-container">
<pre class="src src-emacs-lisp">dotspacemacs-configuration-layers
 '<span style="color: #4f97d7;">(</span>
   <span style="color: #bc6ec5;">(</span>org2blog <span style="color: #4f97d7;">:variables</span>
     org2blog/wp-blog-alist '<span style="color: #2d9574;">(</span><span style="color: #67b11d;">(</span><span style="color: #2d9574;">"my.blog.com"</span>
                               <span style="color: #4f97d7;">:url</span> <span style="color: #2d9574;">"https://my.blog.com/xmlrpc.php"</span>
                               <span style="color: #4f97d7;">:username</span> <span style="color: #2d9574;">"joe"</span><span style="color: #67b11d;">)</span><span style="color: #2d9574;">)</span><span style="color: #bc6ec5;">)</span>
    <span style="color: #2aa1ae; background-color: #292e34;">;; </span><span style="color: #2aa1ae; background-color: #292e34;">... additional layers                               </span>
  <span style="color: #4f97d7;">)</span>
</pre>
</div>
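<p>Additional blogs are just more entries in the same alist. Here is a minimal sketch (the second blog&#8217;s name and URL are invented for illustration):</p>

```emacs-lisp
;; Sketch only: the same :variables block with a second, hypothetical blog.
;; org2blog/wp-login will prompt you to choose between the entries.
(org2blog :variables
          org2blog/wp-blog-alist
          '(("my.blog.com"
             :url "https://my.blog.com/xmlrpc.php"
             :username "joe")
            ("second.blog.com"
             :url "https://second.blog.com/xmlrpc.php"
             :username "joe")))
```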
<p>Once you have that all set, restart Spacemacs.</p>
</div>
</div>
</div>
<div id="outline-container-org2b96ec4" class="outline-2">
<h2 id="org2b96ec4">Taking it for a spin</h2>
<div class="outline-text-2" id="text-org2b96ec4">
<p>After you restart Spacemacs (hopefully with no errors), issue the command:</p>
<pre class="example">META-x org2blog/wp-login
</pre>
<p>It will let you select which blog (in this case <code>my.blog.com</code>) and will ask you for the password.</p>
<p>Then run:</p>
<pre class="example">META-x org2blog/wp-new-entry
</pre>
<p>At that point you can start writing your post! org2blog inserts a few headers at the top of the buffer that you can fill in.</p>
<p>When you are ready to push it to your WordPress blog, just incant one of the following:</p>
</div>
<div id="outline-container-orgebfb8b1" class="outline-3">
<h3 id="orgebfb8b1">Publishing Keybindings</h3>
<div class="outline-text-3" id="text-orgebfb8b1">
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left"/>
<col class="org-left"/>
<col class="org-left"/>
</colgroup>
<tbody>
<tr>
<td class="org-left">post buffer as draft</td>
<td class="org-left"><b>C-c M-p d</b></td>
<td class="org-left"><b>M-x     org2blog/wp-post-buffer</b></td>
</tr>
<tr>
<td class="org-left">publish buffer</td>
<td class="org-left"><b>C-c M-p p</b></td>
<td class="org-left"><b>C-u M-x org2blog/wp-post-buffer</b></td>
</tr>
<tr>
<td class="org-left">post buffer as page draft</td>
<td class="org-left"><b>C-c M-p D</b></td>
<td class="org-left"><b>M-x     org2blog/wp-post-buffer-as-page</b></td>
</tr>
<tr>
<td class="org-left">publish buffer as page</td>
<td class="org-left"><b>C-c M-p P</b></td>
<td class="org-left"><b>C-u M-x org2blog/wp-post-buffer-as-page</b></td>
</tr>
</tbody>
</table>
</div>
</div>
</div><p>The post <a href="https://www.ibd.com/blogging/first-post-with-spacemacs-and-org2blog/">First Post with Spacemacs and org2blog</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1674</post-id>	</item>
		<item>
		<title>First they came for the Whistleblowers, and I did not speak out</title>
		<link>https://www.ibd.com/how-the-world-works/first-they-came-for-the-whistleblowers-and-i-did-not-speak-out/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Sun, 06 Jul 2014 00:41:12 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[Demand Transformation]]></category>
		<category><![CDATA[How the World Works]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<category><![CDATA[Democracy]]></category>
		<category><![CDATA[Kleptocracy]]></category>
		<category><![CDATA[NSA]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=1628</guid>

					<description><![CDATA[<p>First they came for the Whistleblowers, and I did not speak out— Because I was not a Whistleblower. Then they came for the Boing Boing Readers, and I did not speak out— Because I was not a  Boing Boing Reader. Then they came for the Linux Users, and I did not speak out— Because I was not a Linux User.&#8230;</p>
<p>The post <a href="https://www.ibd.com/how-the-world-works/first-they-came-for-the-whistleblowers-and-i-did-not-speak-out/">First they came for the Whistleblowers, and I did not speak out</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" fetchpriority="high" decoding="async" class="alignnone wp-image-1632 size-full" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2014/07/nsa-logo.png?resize=290%2C290" alt="nsa-logo" width="290" height="290" />First they <a href="http://www.pbs.org/wgbh/pages/frontline/united-states-of-secrets/" target="_blank" rel="noopener">came for the Whistleblowers</a>, and I did not speak out—<br />
Because I was not a Whistleblower.</p>
<p>Then they <a href="http://boingboing.net/2014/07/03/if-you-read-boing-boing-the-n.html" target="_blank" rel="noopener">came for the Boing Boing Readers</a>, and I did not speak out—<br />
Because I was not a Boing Boing Reader.</p>
<p>Then they <a href="http://www.techspot.com/news/57316-nsa-classifies-linux-journal-readers-tor-and-tails-linux-users-as-extremists.html" target="_blank" rel="noopener">came for the Linux Users</a>, and I did not speak out—<br />
Because I was not a Linux User.</p>
<p>Then they came for <a href="http://www.wnd.com/2013/08/nsa-crushes-free-speech-on-t-shirts/" target="_blank" rel="noopener">people who mocked the NSA</a>, and I did not speak out—<br />
Because I was not mocking the NSA.</p>
<p>Then they came for the Jews (they always eventually come for the Jews<a href="http://www.jewishjournal.com/rob_eshman/article/nsa_and_jew"> even when Jews think they are mainstream</a>), and I did not speak out—<br />
Because I was not a Jew.</p>
<p>Then they came for me—and there was no one left to speak for me.</p><p>The post <a href="https://www.ibd.com/how-the-world-works/first-they-came-for-the-whistleblowers-and-i-did-not-speak-out/">First they came for the Whistleblowers, and I did not speak out</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1628</post-id>	</item>
		<item>
		<title>CLI to Switch Amazon AWS Shell Environment Credentials</title>
		<link>https://www.ibd.com/howto/cli-to-switch-amazon-aws-shell-environment-credentials/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 16 Jun 2014 04:54:11 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Bash]]></category>
		<category><![CDATA[CLI]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=1616</guid>

					<description><![CDATA[<p>I work with many different AWS IAM Accounts and need to easily switch between these accounts. The good news is the AWS CLI tools now support a standard config file (~/.aws/config) that allows you to create profiles  for  multiple accounts in the one config file. You can select them when using the aws-cli with the --profile flag. But many other&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/cli-to-switch-amazon-aws-shell-environment-credentials/">CLI to Switch Amazon AWS Shell Environment Credentials</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
					<content:encoded><![CDATA[<p><a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" class="alignleft wp-image-1625 size-full" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2014/06/AwsCreds.png?resize=300%2C300" alt="AwsCreds" width="300" height="300" /></a>I work with many different AWS IAM accounts and need to easily switch between them. The good news is that the AWS CLI tools now support a standard config file (<code>~/.aws/config</code>) that lets you create profiles for multiple accounts in one config file. You can select them when using the <code>aws-cli</code> with the <code>--profile</code> flag.</p>
<p>But many other tools don&#8217;t yet support the new config file format or multiple profiles, though they do support shell environment variables. So I wrote a simple Ruby script that:</p>
<ul>
<li>Allows you to specify the profile name as an argument</li>
<li>Reads in the config file ~/.aws/config</li>
<li>Outputs the export statements for publishing the environment variables
<ul>
<li>You can eval the output to set the environment of your current shell session</li>
</ul>
</li>
</ul>
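<p>The parsing step can be sketched with the Ruby standard library alone. This is my illustrative stand-in, not the actual script (which uses the <code>inifile</code> gem), and the sample keys below are fake:</p>

```ruby
# Gem-free sketch of the parsing step: split ~/.aws/config-style text
# into a hash of sections, each a hash of key=value settings.
def parse_aws_config(text)
  sections = {}
  current = nil
  text.each_line do |line|
    line = line.strip
    next if line.empty? || line.start_with?('#', ';')
    if line =~ /\A\[(.+)\]\z/
      # New section header like [default] or [profile foo]
      current = sections[$1] = {}
    elsif current && line =~ /\A(\w+)\s*=\s*(.*)\z/
      current[$1] = $2
    end
  end
  sections
end

sample = <<~INI
  [default]
  aws_access_key_id=AKIEXAMPLE
  aws_secret_access_key=SECRETEXAMPLE
  region=us-east-1
INI

profile = parse_aws_config(sample)['default']
puts "export AWS_ACCESS_KEY_ID=#{profile['aws_access_key_id']}"
puts "export AWS_SECRET_ACCESS_KEY=#{profile['aws_secret_access_key']}"
```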
<p>Say you had a config file <code>~/.aws/config</code> that looked like this:</p>
<pre class="brush: plain; light: false; title: ~/.aws/config; notranslate">
&#x5B;default]
aws_access_key_id=AKI***********2A
aws_secret_access_key=jt41************************************p
region=us-east-1

&#x5B;profile foo]
aws_access_key_id=0K***************K82
aws_secret_access_key=2b+***********************************1g
region=us-east-1

&#x5B;profile bar]
aws_access_key_id=AKI**************GA
aws_secret_access_key=MG************************************/d
region=us-east-1
</pre>
<p>If you don&#8217;t specify any argument to the command, it will output the default profile:</p>
<pre class="brush: bash; title: ; notranslate">
 $ aws_switch
export AWS_ACCESS_KEY_ID=AKI***********2A
export AWS_SECRET_ACCESS_KEY=jt41************************************p
export AMAZON_ACCESS_KEY_ID=AKI***********2A
export AMAZON_SECRET_ACCESS_KEY=jt41************************************p
export AWS_ACCESS_KEY=AKI***********2A
export AWS_SECRET_KEY=jt41************************************p
</pre>
<p>If you specify a profile (in this case <code>foo</code>):</p>
<pre class="brush: bash; title: ; notranslate">
$ aws_switch foo
export AWS_ACCESS_KEY_ID=0K***************K82
export AWS_SECRET_ACCESS_KEY=2b+***********************************1g
export AMAZON_ACCESS_KEY_ID=0K***************K82
export AMAZON_SECRET_ACCESS_KEY=2b+***********************************1g
export AWS_ACCESS_KEY=0K***************K82
export AWS_SECRET_KEY=2b+***********************************1g
</pre>
<p>You would actually use it by eval&#8217;ing the output of <code>aws_switch</code> so it sets the variables in the environment of your current shell:</p>
<pre class="brush: bash; title: ; notranslate">
eval `aws_switch foo`
</pre>
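<p>To make switching a single step, you could wrap the eval in a small shell function in your <code>~/.bashrc</code> (the function name <code>awsp</code> is my invention; it assumes <code>aws_switch</code> is on your <code>$PATH</code>):</p>

```shell
# Hypothetical ~/.bashrc helper; "awsp" is an invented name.
# It must be a shell function (not a script): the exports have to land
# in the current shell's environment, which a child process cannot change.
awsp() {
  eval "$(aws_switch "$@")"
}
```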
<p>Here&#8217;s the code for <code>aws_switch</code>. Put it somewhere in your <code>$PATH</code> and make sure to <code>chmod 0755</code> the file so it&#8217;s executable:</p>
<pre class="brush: ruby; light: false; title: aws_switch; notranslate">
#!/usr/bin/env ruby
require 'inifile'

configs = IniFile.load(File.join(File.expand_path('~'), '.aws', 'config'))

profile_name_input = ARGV&#x5B;0]
case profile_name_input
when 'default', nil, &quot;&quot;
  profile_name = 'default'
else
  profile_name = &quot;profile #{profile_name_input}&quot;
end

id = configs&#x5B;profile_name]&#x5B;'aws_access_key_id']
key = configs&#x5B;profile_name]&#x5B;'aws_secret_access_key']

puts &quot;export AWS_ACCESS_KEY_ID=#{id}&quot;
puts &quot;export AWS_SECRET_ACCESS_KEY=#{key}&quot;
puts &quot;export AMAZON_ACCESS_KEY_ID=#{id}&quot;
puts &quot;export AMAZON_SECRET_ACCESS_KEY=#{key}&quot;
puts &quot;export AWS_ACCESS_KEY=#{id}&quot;
puts &quot;export AWS_SECRET_KEY=#{key}&quot;
</pre><p>The post <a href="https://www.ibd.com/howto/cli-to-switch-amazon-aws-shell-environment-credentials/">CLI to Switch Amazon AWS Shell Environment Credentials</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1616</post-id>	</item>
		<item>
		<title>Message to My Senator Boxer: Uphold your Oath of Office: Squash Shadow Secret Gov&#8217;t</title>
		<link>https://www.ibd.com/how-the-world-works/demand_transformation/message-to-my-senator-boxer-uphold-your-oath-of-office-squash-shadow-secret-govt/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Sat, 13 Jul 2013 21:22:18 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[Demand Transformation]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<category><![CDATA[Constitution]]></category>
		<category><![CDATA[Politics]]></category>
		<category><![CDATA[Prism]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=1599</guid>

					<description><![CDATA[<p>Just a reminder you swore an oath of office to uphold the Constitution but you are not upholding it if you are not investigating and prosecuting those in all branches of the government that have been sidestepping the Constitution. The 13 years of Cheney/Obama administrations has set up a shadow secret government with its own courts, laws and interpretations of&#8230;</p>
<p>The post <a href="https://www.ibd.com/how-the-world-works/demand_transformation/message-to-my-senator-boxer-uphold-your-oath-of-office-squash-shadow-secret-govt/">Message to My Senator Boxer: Uphold your Oath of Office: Squash Shadow Secret Gov’t</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
					<content:encoded><![CDATA[<p style="text-align: center;"><a href="http://irregulartimes.com/keeptheoathofofficebutton.html" target="_blank" rel="noopener"><img data-recalc-dims="1" decoding="async" class="aligncenter wp-image-1603 size-full" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2013/07/keeptheoathbuttonthumb.png?resize=680%2C680" alt="Remind your elected officials from the President on down to keep their oath of office: to Preserve, Protect and Defend the Constitution of the United States of America." width="680" height="680" /><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft  wp-image-1602" style="border: 1px solid black; margin: 5px;" src="https://i0.wp.com/blog2.ibd.com/wp-content/uploads/2013/07/BoxerSwearingIn1.jpg?resize=140%2C143" alt="Senate Oath of Office" width="140" height="143" /></a>Just a reminder: you swore an oath of office to uphold the Constitution, but you are not upholding it if you are not investigating and prosecuting those in all branches of the government who have been sidestepping it.</p>
<p>The 13 years of Cheney/Obama administrations have set up a shadow secret government with its own courts, laws and interpretations of the public laws.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class=" wp-image-1600 alignright" style="border: 1px solid black; margin: 5px;" src="https://i0.wp.com/blog2.ibd.com/wp-content/uploads/2013/07/clapper_not_swearing.jpeg?resize=167%2C94" alt="Arrest and Try James Clapper for Lying to Congress" width="167" height="94" />Members of at least the current administration have now been <a href="http://www.webpronews.com/intelligence-director-james-clapper-admits-that-he-lied-to-congress-is-sorry-2013-07" target="_blank" rel="noopener">proven</a> to have <a href="http://www.wyden.senate.gov/news/blog/post/wyden-and-udall-to-general-alexander-nsa-must-correct-inaccurate-statement-in-fact-sheet" target="_blank" rel="noopener">lied</a> <a href="http://www.webpronews.com/nsa-chief-denies-existence-of-domestic-spying-program-2012-03" target="_blank" rel="noopener">under</a> oath in <a href="http://blogs.chicagotribune.com/news_columnists_ezorn/2013/06/fire-james-clapper-then-put-him-in-the-dock.html" target="_blank" rel="noopener">front</a> of <a href="http://takingnote.blogs.nytimes.com/2013/06/11/making-alberto-gonzales-look-good/" target="_blank" rel="noopener">congressional hearings</a>.</p>
<p>But nothing has been done. Instead the whistleblowers are in jail or forced to seek asylum in other nations and the real criminals are in the seats of the White House, Congress, the Judiciary and the Media.</p>
<p>I&#8217;m not even bothering to write to Dianne &#8220;NSA&#8221; Feinstein, but for some reason I still have hope that you have a bone of integrity left in your body.</p>
<p>You are being called to get out of your comfort zone and safeguard our Democracy before it&#8217;s too late (though it may be too late already, especially if you and at least some of your colleagues don&#8217;t do something about it).</p><p>The post <a href="https://www.ibd.com/how-the-world-works/demand_transformation/message-to-my-senator-boxer-uphold-your-oath-of-office-squash-shadow-secret-govt/">Message to My Senator Boxer: Uphold your Oath of Office: Squash Shadow Secret Gov’t</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1599</post-id>	</item>
		<item>
		<title>Data Wants to be Free (as in Freedom)</title>
		<link>https://www.ibd.com/how-the-world-works/data-wants-to-be-free-as-in-freedom/</link>
					<comments>https://www.ibd.com/how-the-world-works/data-wants-to-be-free-as-in-freedom/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Tue, 09 Apr 2013 09:44:56 +0000</pubDate>
				<category><![CDATA[Author-Berger]]></category>
		<category><![CDATA[Creating The Future of Abundance]]></category>
		<category><![CDATA[Demand Transformation]]></category>
		<category><![CDATA[How the World Works]]></category>
		<category><![CDATA[Abundance]]></category>
		<category><![CDATA[Author_Berger]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Medical]]></category>
		<category><![CDATA[Privacy]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=1587</guid>

					<description><![CDATA[<p>Tonight I was at the Sensored Meetup #10: Data! APIs! Standards!. Besides some great thought-provoking talks, there were some great discussions afterward that got me thinking more clearly about some stuff that has been bubbling in my brain. Scott McNealy Was Right &#8211; Privacy: &#8216;Get Over It&#8217; I really thought McNealy was wrong when he said way back in 1999 that consumer privacy issues&#8230;</p>
<p>The post <a href="https://www.ibd.com/how-the-world-works/data-wants-to-be-free-as-in-freedom/">Data Wants to be Free (as in Freedom)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
					<content:encoded><![CDATA[<p>Tonight I was at the <a title="Sensored Meetup #10: Data! APIs! Standards!" href="http://www.meetup.com/Sensored/events/108060432/" target="_blank" rel="noopener">Sensored Meetup #10: Data! APIs! Standards!</a>. Besides some great thought-provoking talks, there were some great discussions afterward that got me thinking more clearly about some stuff that has been bubbling in my brain.</p>
<h2>Scott McNealy Was Right &#8211; Privacy: &#8216;Get Over It&#8217;</h2>
<p><a href="http://blog2.ibd.com/privacy-policy/attachment/mcnealy-sml/" rel="attachment wp-att-795"><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft wp-image-795 size-full" style="margin: 5px;" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/01/mcnealy.sml_.jpg?resize=100%2C112" alt="Scot McNealy" width="100" height="112" /></a>I really thought McNealy was wrong when<a title="Sun on Privacy: 'Get Over It'" href="http://www.wired.com/politics/law/news/1999/01/17538" target="_blank" rel="noopener"> he said way back in 1999</a> that consumer privacy issues are a &#8220;red herring&#8221; and that &#8220;You have zero privacy anyway.&#8221; But today, in a conversation with one of the participants, <a href="http://www.meetup.com/Sensored/members/70353622/?a=viewBioRsvpList_control2" target="_blank" rel="noopener">Antoine Lizee</a>, about how we can make people&#8217;s medical data available to medical researchers, I realized that McNealy was correct.</p>
<p>We both felt that if medical sensor and other data from millions of people could be made available in some open source form to researchers, huge breakthroughs in medical science would quickly emerge just from modern data mining, machine learning and statistical processing. Of course the issue of privacy came up almost immediately. But his experience, and recent news that even anonymized DNA sequences can be traced back to an individual&#8217;s identity, show that even with anonymization it&#8217;s almost impossible to completely protect an individual&#8217;s identity in light of modern big data techniques.</p>
<p>I wondered, &#8220;Actually, what are people&#8217;s real fears about their medical data getting out?&#8221; Is it any different from five or ten years ago, when people said<a title="Dilbert 1996" href="http://dilbert.com/strips/comic/1996-01-11/" target="_blank" rel="noopener"> they would NEVER use a credit card on the Internet</a>?</p>
<figure style="width: 1200px" class="wp-caption alignnone"><img loading="lazy" decoding="async" src="https://assets.amuniversal.com/9ba847b09fd4012f2fe600163e41dd5b" alt="I would never buy something over the Internet" width="1200" height="369" /><figcaption class="wp-caption-text">Dilbert by Scott Adams January 11, 1996</figcaption></figure>
<p>How long, and what, would it take to bring about a similar change in mass mentality, so that folks would not mind their medical data being used &#8220;on the Internet&#8221;?</p>
<h2>Big Data Processing of  Medical Sensors: A Solution to Rising Health Care Costs</h2>
<p>I have long believed that if the data collected from medical sensors and the burgeoning world of the <a title="Self Knowledge Through Numbers" href="http://quantifiedself.com/" target="_blank" rel="noopener">Quantified Self</a> could be aggregated and made available to researchers (<a title="Crowdsourced coders take on immunology Big Data" href="http://blogs.nature.com/news/2013/02/crowd-sourced-coders-take-on-immunology-big-data.html" target="_blank" rel="noopener">and not just</a> <a title="Audrey de Grey, Computer Scientist dedicated to ending Aging as we know it" href="http://en.wikipedia.org/wiki/Aubrey_de_Grey" target="_blank" rel="noopener">&#8220;medical researchers&#8221;</a>) we would enter a new golden era of medical breakthrough and real cures for major illnesses.</p>
<p>With sample sizes of MILLIONS instead of the 10 to 100 people in most modern medical studies, patterns of health and illness will practically just appear from statistical processing of the billions of data samples. That alone would make it worthwhile for us to do it, for the government (or insurance companies) to fund it, and for individuals to feel there is value in allowing their data to be aggregated, even if it means their data may leak out.</p>
<p>And using similar Big Data techniques used today to sell more stuff on the Internet (<a title="Runa Big Data Real Time Processing" href="http://www.runa.com/products/technology/" target="_blank" rel="noopener">like we did at my last company Runa</a>), we could map some of those discoveries of patterns back to the real time processing of individual&#8217;s sensor data to let them know if their personal real time data stream indicates they are about to have a heart attack or something.</p>
<h2>Just Do It</h2>
<p><a href="http://www.gadgetwiki.com/20120406/nike-fuelband-perfect-sports-companion/" rel="attachment wp-att-1592"><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft wp-image-1592 size-full" style="margin: 5px;" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2013/04/nike-fuel-band-launched_small.png?resize=240%2C110" alt="" width="240" height="110" /></a>My conclusion was that we need to break the logjam and start some projects that demonstrate how powerful it will be to do open Big Data medical research using aggregated data. One way would be to get companies with silos of Quantified Self and similar data to make it available (with permission from the individuals) to open medical research. I&#8217;m sure there are other short term ways the community can come up with to show that this kind of research can have huge positive results.</p>
<h2>Legal Protections are More Viable than Technical Protections</h2>
<p>Mechanisms that can be publicly audited should be put in place to keep the data as anonymized as possible, but as mentioned earlier, medical data is inherently personally identifiable, especially if drawn from multiple sources and linked with other publicly available personal info (aka Facebook and the like).</p>
<p>There can be huge benefits in allowing at least some explicit linkage of personal data to the person. The primary one would be to allow the processor of the data to notify the individual if it finds patterns that indicate a current medical problem or predict a high probability of a future one.<img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft size-full wp-image-1588" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2013/04/rustychains.jpg?resize=149%2C203" alt="DNA Privacy" width="149" height="203" /></p>
<p>So we need the aggregators and users of this huge pool of data to be responsible and we need to make sure individuals don&#8217;t have to worry about discrimination or other negative impacts of their medical info leaking out.</p>
<p>This is much more a legal issue than a technical one. There is already <a title="Genetic Discrimination" href="http://www.genome.gov/10002077" target="_blank" rel="noopener">The Genetic Information Nondiscrimination Act of 2008</a> and related laws that <em>protect Americans from discrimination based on their genetic information in both health insurance (Title I) and employment (Title II)</em>. Just as bank laws and policies that limit the risk of using your credit card on the Internet make people much more comfortable, we need appropriate laws and corporate policies so people feel comfortable sharing their medical and personal sensor data as well.</p>
<p>So if we could put in place as many technical and legal protections as possible, combined with the huge individual and collective win of applying machine learning and statistical processing to the huge corpus of personal &amp; medical data already being collected by individuals, we could come up with major new cures and solutions to age-old health problems and address the US&#8217;s core economic problem (health care costs) in one very low-cost way.</p><p>The post <a href="https://www.ibd.com/how-the-world-works/data-wants-to-be-free-as-in-freedom/">Data Wants to be Free (as in Freedom)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/how-the-world-works/data-wants-to-be-free-as-in-freedom/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1587</post-id>	</item>
	</channel>
</rss>
