Freddy Kristiansen does it once again – solution for the problems you didn’t even realize you had 🙂
I have had some trouble keeping the dataset and RDL layout of my AL reports in sync. This may seem really obvious if you know the trick, but I could not find clear documentation for the process.
When designing AL reports, if you make a change to the dataset in the report object, such as adding or removing dataitems or columns, you need to make sure your RDL file stays in sync. Adding the columns or dataitems to the RDL by manually editing the XML is a sure-fire recipe for breaking your report.
The hint toward the solution is in the “Creating an RDL Layout Report” article.
When you build your initial report object, use the Build action in Visual Studio Code (CTRL+SHIFT+B) to automatically generate the RDL file from the report.
For this to work, you must specify the following properties on the report:
DefaultLayout = RDLC; RDLCLayout = 'MyRDLCReport.rdl';
Note that the RDLCLayout property is a path within the current project. Your path may actually look something like this:
RDLCLayout = './src/ReportLayout/MyRDLCReport.rdl';
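For illustration, a minimal report object with these properties might look like this (the object ID, name, and dataset contents are hypothetical):

```al
report 50100 "My RDLC Report"
{
    DefaultLayout = RDLC;
    // Path is relative to the project root; adjust to your folder structure
    RDLCLayout = './src/ReportLayout/MyRDLCReport.rdl';

    dataset
    {
        dataitem(Customer; Customer)
        {
            column(No; "No.") { }
            column(Name; Name) { }
        }
    }
}
```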
Once you save your report and Build it, the RDL file will be automatically generated in the appropriate folder.
This leads us to the solution: if you make any changes to dataitems or columns in your report file, save the report and run Build (CTRL+SHIFT+B), and only then edit the RDL file using SQL Server Report Builder. The Build action will automatically update your RDL file with the new dataitems and columns.
You may have noticed that, since mid-March 2019, partners can no longer raise technical support tickets on PartnerSource. If you check the banner link on PartnerSource, you will be sent to this page, which advises we should be using support.microsoft.com – which is not actually the way to go, as it requires a credit card or prepaid incidents.
An alternative approach, if you are a partner and a member of the “Ready To Go!” programme, is to raise tickets via Collaborate Feedback. However, the tickets I personally raised through the Collaborate/“Ready To Go!” programme took a very long time to get any kind of feedback, and were mostly closed without resolution, with Microsoft leaving the message “you should raise a tech support ticket”.
Microsoft’s feedback on this is that they are in the process (!) of setting up proper support for D365BC. Until this is sorted, the process seems to be as described below. It should result in a tech support ticket that does not require a credit card or subscription – which is what we need when reporting bugs in the base product.
To start, log in to Partner Center and access the Customers list.
Select one of the customers to open the Service Management page. Now scroll down to the bottom of the page and select New Request.
From the available options pick Office 365/Dynamics 365:
This will open the Office support portal, which by default opens the “Need help?” dialog on the right (1). You can also open the same dialog by clicking the Support section on the left (2):
Type the details of your issue in the box and your support request will be created. This is very important: make sure you include the words “Dynamics 365 Business Central” in the text so the request gets routed to the appropriate team at Microsoft!
Was this article useful to you? Let me know in the comments!
I was reminded (via comments on this issue) that there is still some confusion about the limitations and process of deploying per-tenant extensions to more than one tenant.
Dmitry Katson wrote about this issue back in January, as part of his tips and tricks post, but I would like to share how I have approached this from the process point of view. For this purpose, I have created a simple demo app, which you can find on my GitHub.
Much of the following will be quite obvious if you are experienced with GitHub, but if you are coming from Dynamics NAV world like I did, I hope it will be of some interest.
If we deploy a per-tenant app (i.e. an extension built in the 50000-99999 object range) to a PROD tenant, we can’t deploy an app with the same name and version to any other tenant in the same region, unless the Package ID (a GUID value) is exactly the same.
Every compile of the app from VS Code will create a new Package ID – this happens both when using F5 to deploy from VS Code and when building with CTRL+SHIFT+B.
Because this Package ID was not visible from BC, and because it was too easy to overwrite the APP file by recompiling, developers would often increment app versions to deal with this. With the Spring release we can now use the Inspect functionality (CTRL+ALT+F1) on the Extensions page to see the Package ID, but it is still too easy to overwrite the app file.
We can use Get-NAVAppInfo with -Path switch to read Package ID of the initial compiled app:
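The command itself looks like this (the app file name is illustrative; point -Path at your compiled package):

```powershell
Get-NAVAppInfo -Path '.\Default publisher_MyApp_1.0.0.0.app' |
    Format-List Name, Publisher, Version, PackageId
```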
After using CTRL+SHIFT+B to recompile the app:
Get-NAVAppInfo is now showing different PackageID:
Unless we are very careful and always commit and push the changes after each compile, we have now lost the version of the file we had deployed to the client’s tenant.
This is, of course, not much of an issue if we only develop our per-tenant app for a single tenant, but if we intend to install the same solution to multiple tenants in the region, we will have to increase version for each one.
Or will we?
We can use the Release functionality on GitHub to keep track of our releases and source code. It allows us to upload a ZIP file containing the APP file to GitHub, creates a Release, and associates it with the state of the source code at the time of release.
Here is the Repository after initialization and 1st commit:
On the main page of the repository, I click on Releases. Because this is the first release in this repository, I then click Create New Release.
In the page that opens, we fill in the details as highlighted below, attach the zipped APP file, then click “Publish release”. Note that the APP file must be zipped, as GitHub does not recognize the .APP format.
After clicking Publish Release, the following page will open. A few points of interest:
- Here we can see the Commit ID. Clicking it takes us to the Commits page, where we can see that the Release is now linked to our last commit, “1st release”.
- We can always see when we are looking at latest release
- A Release always has the source code (zipped and tar/gzipped) from the related commit attached
Remember to delete the ZIP file from the project folder – as we have uploaded it to GitHub, there is no reason to keep it in the project too.
The final step is to communicate the link to the latest release to other developers, consultants (and possibly clients), to be used when deploying to PROD or when testing in local Docker.
As we continue working on our app, we will now always be able to retrieve this specific release to deploy to client’s tenants.
Once changes in app are ready for PROD deployment, we will:
- Build and do final test on our app
- Increase version in app.json and turn off ShowMyCode
- Create final APP file
- Commit and push all of the changes to GitHub
- Zip the final APP file
- Create new release, and attach zipped file
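The zip step from the list above can be scripted with plain PowerShell, and if you prefer the command line over the web UI, the GitHub CLI can create the release too. A sketch, with illustrative file names and tag:

```powershell
# GitHub does not accept .app files directly, so wrap the app in a ZIP first
Compress-Archive -Path '.\Default publisher_MyApp_1.1.0.0.app' `
                 -DestinationPath '.\MyApp_1.1.0.0.zip' -Force

# Optionally create the release from the command line (requires the GitHub CLI)
gh release create v1.1.0 '.\MyApp_1.1.0.0.zip' --title 'v1.1.0' --notes '2nd release'
```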
I hope you find this article useful, and that it saves you at least some of the headaches I had faced dealing with per-tenant extensions.
Every few years I run into the INCONSISTENT error at one client or another. Usually, it is a sign that some “hacky” coding has been done that messes with the standard and produces an unbalanced journal when posting.
A few days ago I ran into the same issue, but this time it was a combination of a long-used add-on (a banking solution, in use for 10+ years) and localisation functionality (withholding tax) that users only now started using.
As the INCONSISTENT error only pops up once posting is complete, this was quite difficult to debug and get to the bottom of.
You could, for example, turn on the debugger, go through the entire process line by line, and take note of G/L Entries that are posted in the process.
Or, you could set a debugger breakpoint just before the INCONSISTENT error and use it to get the data you need from SQL Server.
This is exactly what I did. I placed a breakpoint prior to the INCONSISTENT error, started the debugger, and then posted the journal.
I then hopped over to SQL Server and wrote a simple query, shown below. I have used a similar solution before to check on the status of long-running updates, such as the Adjust Cost – Item Entries job, which historically has been known to go into infinite loops.
The Read Uncommitted transaction isolation level will let you see all entries in the database, even uncommitted ones. While the breakpoint holds the transaction open, you can grab the data as it would look when posted and analyze it to your heart’s content.
- Never use this in production systems. The table in question is locked while you are playing on the SQL Server, and no posting will be possible!
- In most cases, the Posting Preview function will give you the same results without the risk of locking. Unfortunately, this was on a version of NAV before Posting Preview was introduced.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO

SELECT TOP (1000)
     [timestamp]
    ,[Entry No_]
    ,[G_L Account No_]
    ,[Posting Date]
    ,[Document Type]
    ,[Document No_]
    ,[Description]
    ,[Bal_ Account No_]
    ,[Amount]
    ,[Global Dimension 1 Code]
    ,[Global Dimension 2 Code]
    ,[User ID]
    ,[Source Code]
    ,[System-Created Entry]
FROM [DatabaseName].[dbo].[CompanyName$G_L Entry]
ORDER BY [Entry No_] DESC
It has been a long time since the last post – not because I was lazy, but because there was too much to do 🙂
I had some fun trying to get Text FlowField to show on NAV 2016 Activity page.
Per the NAV documentation, Cues are supposed to support Integer, Decimal, and Text fields as of NAV 2015. The funny thing is, there is not a single example of a Text field on the standard NAV Cues in 2016.
So when I tried to add one, I got a fairly useless stack image with a count of 0 and the text field title.
Fast forward an hour or so of trying various things.
The trick is to add the Text field OUTSIDE of the CueGroup. It needs to be at the same level as the CueGroup; otherwise NAV assumes it needs to be shown as a Stack, which is a bit useless.
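Shown here in modern AL syntax for readability (in NAV 2016 itself you would do the same in the C/SIDE page designer; object numbers and field names are hypothetical), the layout looks like this:

```al
page 50101 "My Activities"
{
    PageType = CardPart;
    SourceTable = "My Cue Table";

    layout
    {
        area(content)
        {
            cuegroup(Documents)
            {
                // Integer FlowField - renders as a normal Cue tile
                field("Open Invoices"; "Open Invoices") { }
            }
            // The Text FlowField goes here, at the SAME level as the
            // CueGroup - placed inside it, it would render as a Stack
            field("Status Text"; "Status Text") { }
        }
    }
}
```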
NB: This is a rewritten version of earlier article.
You could write:
You should write:
Or even better:
The above is what I wrote before I went on leave.
Funnily enough, after getting back from my leave I ran into an issue that directly relates to it, and which made me rewrite this blog post.
SETFILTER will, in most cases, be able to replicate SETRANGE functionality.
However, if there are reserved characters within the filter text, these will be interpreted differently depending on how you pass the value. If you use:
SETFILTER(FieldName, '%1', txtFilter);
txtFilter will be parsed as filter text. This will cause runtime errors if txtFilter contains something like the “(” character. If you instead wrap the placeholder in quotes:
SETFILTER(FieldName, '''%1''', txtFilter);
the filter will be applied without parsing it. This replicates SETRANGE functionality, or entering the value in quotes in a filtering dialog in the user interface.
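A short AL sketch of the difference (the table, field, and variable names are just for illustration):

```al
procedure DemoFilters(FilterText: Text)
var
    Customer: Record Customer;
begin
    // Parsed as a filter expression: characters like ( ) | & .. * are
    // treated as filter operators, so a value such as 'Foo (Bar)'
    // causes a runtime error
    Customer.SetFilter(Name, '%1', FilterText);

    // Wrapped in quotes, the value is matched literally - safe for any text
    Customer.SetFilter(Name, '''%1''', FilterText);

    // Equivalent literal match using SETRANGE
    Customer.SetRange(Name, FilterText);
end;
```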