I am executing a stored procedure using the JDBC driver. When I have an output parameter of type CHAR, VARCHAR, or VARBINARY with precision 8000, executing the SP through the JDBC API returns only 4000 characters. I need to know if there is anything I missed in the execution. The query is simple:
EXEC SCHEMA_NAME.STORED_PROC_NAME @c_varchar = ?, @c_varchar2 = ?
In the above example, @c_varchar is an input parameter and @c_varchar2 is an in/out parameter. @c_varchar2 has a precision of 8000, yet the output I get is only 4000 characters long, even though the procedure is designed to return 8000 characters. Please help me rectify this issue.
Below is the code snippet:
CallableStatement callStmt = sqlConnection.prepareCall("EXEC dbo.proc_name @c_varchar = ?, @c_varchar2 = ?");
callStmt.setString(1, "2345");                    // input parameter
callStmt.setNull(2, Types.VARCHAR);               // initial value for the in/out parameter
callStmt.registerOutParameter(2, Types.VARCHAR);  // register parameter 2 as an output
callStmt.execute();
String output = callStmt.getString(2);            // comes back with only 4000 characters
Procedure design:
CREATE PROC dbo.proc_name
    @c_varchar  varchar(8000),
    @c_varchar2 varchar(8000) OUT
AS
BEGIN
    SELECT @c_varchar2 = c_varchar2
    FROM dbo.char_SP
    WHERE c_varchar = @c_varchar;
END;
Hey @amokhalgay
The problem is likely that the JDBC driver treats `VARCHAR` parameters as Unicode (`NVARCHAR`) by default, which causes truncation at 4000 characters (since `NVARCHAR(4000)` already uses 8000 bytes).
Before we jump to a solution, please share how you are using the JDBC driver in your case.
This is the code I use to get the SQL connection from the driver.
Driver version: mssql-jdbc-12.2.0.jre8.jar
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
// "info" holds additional connection properties (not shown here)
Connection sqlConnection = DriverManager.getConnection(prepareJDBCUrl(jdbcConnectString, databaseName), info);

public static String prepareJDBCUrl(String jdbcConnectionString, String databaseName) {
    StringBuilder sb = new StringBuilder();
    return sb.append("jdbc:sqlserver://")
             .append(jdbcConnectionString)
             .append(":1433;database=")
             .append(databaseName)
             .append(";encrypt=true;authentication=")
             .append("ActiveDirectoryServicePrincipal")
             .toString();
}
Try adding `sendStringParametersAsUnicode=false` to your JDBC connection string.
This ensures string parameters are sent as `VARCHAR` instead of `NVARCHAR`, so your `varchar(8000)` output is no longer cut at the 4000-character `NVARCHAR` limit.
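For example, based on the prepareJDBCUrl helper shared above, a minimal sketch of where the property could go (only the extra `.append` is new, everything else is unchanged):

public static String prepareJDBCUrl(String jdbcConnectionString, String databaseName) {
    return new StringBuilder()
            .append("jdbc:sqlserver://")
            .append(jdbcConnectionString)
            .append(":1433;database=")
            .append(databaseName)
            .append(";encrypt=true")
            .append(";sendStringParametersAsUnicode=false")   // send strings as VARCHAR, not NVARCHAR
            .append(";authentication=ActiveDirectoryServicePrincipal")
            .toString();
}

Alternatively, since the connection is opened with a `Properties` object (`info`), the same setting can be passed there instead: `info.setProperty("sendStringParametersAsUnicode", "false");`.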
This worked, thanks.
But what is the impact of this? We use Fabric Warehouse tables and stored procedures, and there will be tables with Unicode characters in column names, and the data can also contain Unicode characters.
Great, glad it worked.
For any operations involving Unicode content, you should either continue using the corresponding Unicode-specific JDBC methods (e.g., setNString) or otherwise verify that the default code page conversion does not corrupt the data; see the sketch below.
The easiest way is to verify the Unicode data manually in the Warehouse (WH) and see the impact.
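A minimal sketch of that advice (the procedure name `dbo.unicode_proc` and its parameters are hypothetical, used only for illustration):

// With sendStringParametersAsUnicode=false, setString sends values as VARCHAR;
// the N-methods still send Unicode, so use them for nvarchar columns/parameters.
CallableStatement stmt = sqlConnection.prepareCall("EXEC dbo.unicode_proc @name = ?, @result = ?");
stmt.setNString(1, "Grüße");                   // sent as NVARCHAR (Unicode preserved)
stmt.registerOutParameter(2, Types.NVARCHAR);  // read the output back as NVARCHAR
stmt.execute();
String out = stmt.getNString(2);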
Please accept this answer if it was helpful.